[Binary content not shown: tar archive containing `var/home/core/zuul-output/`, `var/home/core/zuul-output/logs/`, and `var/home/core/zuul-output/logs/kubelet.log.gz` (gzip-compressed kubelet log, owner `core`). The gzip-compressed payload is not renderable as text; decompress `kubelet.log.gz` to view the log.]
9edǤ¬t&匛HbeN P\oH҈5!λYt= > "+eoq0@5[F^tttv|A D^(F(a,W+/}I4vEՇUbL۩0oMW]j{]iBocy?O 䅘".w*}4%1~%@J q%n|z wZOq0!ue 15e7Yf:KI5=Y`ga :=3LG@7 ߹%gB0Øh5@VrD4,h*u,4H0~t2LaZp~okjs,H+~U2\JImN$Fc}~u L~w|ëp~W)(%F#gED)gI2j@(bc&GZE\N;AŔQcRX`3˨t'*0܂Ylo/p?_B׫jRwu?B6Ԏa{>*mcʅ+cvS7u>Swݘ/g'.=lIa![Td(EMj yzɄMAgt4E W>1MޭQݛځڳI9C0jRNq')UBpl8xmoFy)C0LrتܡYv=IwW */> .1C;."DP`D"Hm~'Z Q]59ü9>tnkyͅMtuS{白q|VbcIiz߇Uqϫۋ.;bW x?O N<΍p2']gY'@fÔ`0=ⓋL!̟8^ד3hzo8Zq1|؆P+׊˱t.n dَb Dk痁 1tl?WVCTKog7mN'WKHKt]Z_1n0efޱQx s%L!q~ңӝ[-^zr9p ΅_^\֚ef٥"0דK۟rDL㛸X0eSynbPQ<3E^u':Wg}+ś:d[ au(<0F,SrB:FܨV`ԍu/7?߼~oߟ7?YoI014`"bn_j֫}W3ͷcf 彼.7pi|fw_c?_W^2i1g)13l?럀Wtr?mUr ]HB5h@MU5׭1Y16HXbMTmJaSIkRr Od@M&V<%t J0P>=U`$TuDӮ ٘XEW=(o4aZeK)m0>Jڋ>ǧk5p6Ur4 {& 9h3!y:͹MK@scvL1sc}΅H6AKwNi!?gkJ}rh= \kq]HVajXo/{]T*a| }m>K}"^lwv'+NSg{*N!vtjuGa +1 {+Zgaw?ߗoO. lnPKPP i4Z᧝6~0HQN^EF}t-@g0f#F83s-%!`cJŽpJI,my9ğN{S靮7GlphkbW"57o"F\Y s#[ç7}C L?fPā9-R;O`(嚕!Iq TC~1%p@CLG%RJDo0zZ.삣BvYuNC+]~f >&' '&qXyL % v,lLg"$ay:fzpq3`0IMy{G_dSʆ,7%;_ܸףC/En%6?PlcZ۽7/ ]E6 q1i!T.K2Vk9Em\@bн8 mǁY r2UUd21v!8q磂cn5>}0΁6Uz`Ly-|d;#Q|/%nebXс@><:'|@/qx/.ȃSro$Z3-vYGg|:@AiD˚n'@fBʘNB<ĉLQbASGԑUxR,C<&xؙ*(ꃦ 6\oD@q#cq Xd±?P mKh?=vxB;&'4dEIGX ,QR2\8kmE"қ}f 8uU)!bN5|Hmp[lGZA>´W B 6QÂ<2Dq\ cq"Yom6ułk/2c5hk^rjbPYXX1Ȝ m9VT}q//Ҝ63Ej{s~A.n_#,L7nnd =!"}**X{))@JnPb΂wDðq I>2wZ]n:.6c2h`i`i1Cwz55iyM1M2]]w 3sɲBW&LLk+4Q}ԂJ.NIv6fa`HvM׃N7If:\ܒ D ׃mӓ`cv4An/{W^ D(9<|syN>9*&|͜|syN>96'|&|C4syN>τ 3'L9<'|\N>9<'|sy֢ryN>9<'|syoB텤uj|0>j/(Z0n50\By)#C^3ý菄oSb"" ƌhB[-(SQ62 '<%U: sD/=F D  EJ1ib*8Q@؄ A2`Qc(Rk Bzqyֱ 8[hQQj>Aw2/q6vn}HJCk7Oٸќ8-pY 0&:rʉVQI% @_i_uPl*ג6l k9#R"%1dNF03,*͸Jb,"a*H&6GܮX" yvփ:4nb x['1*M J2R)mp,5]MFF.1h,0#$8X3I#ƌ cBzS5?J0[V6c\@)bcG@a  .vsηwtv5s_.\ J=;|l!|{.9Ou۬˺|uwS}7%Qۡ?q)9,OҡӝĮ盯@WU)Jv*$wP+_Ԁ64nS[1FK>[QR8@vʁ-֠X I8" Jk#,FU.Jqhx^Qu8*G&0*pZ1mF𖔵 <>`E-#;ULUg8*Nc,؞rsEBʭ1LKbVU901La:aj^!lqM Zq61NJz;m^r-" `V8Afx.#"zk[٥u:,*h6 _ 0#Ԃ*Vio#JJu(JJޓl7:pO=vV"}dA(zzܓ챯#_}=rjq3 3 *fP S>[~OiJyN^O~[raAA $ SKO-RGkA>x#B\H9!Pgsr'IǣoWu9s+8^nsgO۪"kcɍU2 EbBL CH8Cd[)C 8p3vCtѫ Ahɻ_|EFkfi[{1Bøc0!ϾiBXNӄ= `_¸`!(@Ђd 
k,)PJ!tje$zZ]cSX(nBp#0LѬ}B](XDLRBU`N`f.J)yps-F( Q6hd%,ZRw([7soa-_5|aeCڡ{]9_$MUAg8 H:N/YvRDu GA{Z(-;y-Xhm 4wFm6*GT KIw 阂;g3Գ:Fؖ8bkjGoMQklYȲqٽe9Wg9rTNt)S^xA@& 5f1[r歵ܻńzUfj+}s&SuO[_kAz&km..&o}]]T;i]|Mo.Nn{|7 #C2knX8vsn]tTfmwٶc;_$aWܡ4,l6'-_P%L9}1ٖ \b-{%@#rwmI0S?yB-iWp~>bRc6(SX8<]iN\ɐ(%Q[Dԁ@i8@8V`DuN[ٝ ۈdfc>spb;=L}7(]*(nwp܁9,shRφѰ _콪^ș;<}X?JmT'*N/K*/;BV߇ϳPu;|w~I7qz'A}*nG{RcB(kB!00Z?{Ƒ/I;R/9:>F_%)!);:߷z!!)rh +4fkڂ^ rhĮZX99M(ԇy/&c(|^"aƽ0F)f'+ fbV(fBuvO EE,ξ3/tٻ;Y֋_={;2rkf5=%8v/~ȮG/f yMs'&B}ӚWsزǬ@5v3 K D4v,bi]4!gܦޑpb#Vim)7S܂$n*[sS-(GqӐ(h^wתb.VF :@9Q]ca4w(9 mzo]Ӽ|Wnq@+ tt=yuohrլ0Ci4adJg7o:3l`pH޼߄yE3!6XJn M\Fn71s~OHOJ^lݥO;7my\&2#rOLͲRnT>;deUF\aS{j->kH2ʉ XHҐ\ršsTn }i[xsM|1a7;T6nn[G*u/^`zhռ,K|/qLэ ceF*r,%Y*`LJk,"K  .1iC~&M ^`,|>ѧtk[c'B6JOTm<'w rjfiw9;P InO'>tc:G.!81>aVE80% \#bwBf -, Q]`FʩR ˽)t|[Sp3.šҖ;^OlC-Zx۔ \&sj Qcu6,Y Ki?^!aӚblBQ(}*삒-b'8>jK ^Ȍ ^`A ƜJi95v+Ytacq. B“} 1)Q>3z0q͓_6M?ҾqRX -!+.S^Pc KL=U\(Djlhd0 3 j-$M*,!uJ!#a']Iհn\Ek7,&Tt88f༂цS@G5r|`#n4qF:dlUƴ  QHۋd$&A:łKs6̥&dN:I;@(;˟3C6?Sn7T踼 ?\kLޏOOWZrD$Ι:WI."oФOZ$[&Z&Z&m#D%RD3ATj4 VRj4[DG'td[.R[B7zf3)a#mB9qz7[eQU}5pun״!O Un^W jtث]I'i?j"XV 3VMRv`De[V}9_JC`RGmsfȵ^k#ɼ:@MxM\cԩx8KZIM{9Xiˣ<Ws!Oq[&~dũZUHr pv"bx>H͸5D-[Cn[tE !I%"Fj*e;PчP`:-RLp7t #6|4CE<U6"(9E(ꊄWms[ʤGWV*͑q^tx($`R`A<jdHXG"\JlBLª1jv (K)A`A8zyn7jy*t&vpqΠ)p"AB`H10!;kaY~!k/8zaF/Y[)aaPLA R >ҋ\uaG^?|ڂR,=zm##-uZsek~\'w ufV]JeᣉJ>ʐDQ|s0p?QUV~]W3x$|/Ak\*Usg%w1c'̐5A0ɵ r B($aeh՞2#,RFXp4ΉmƉ}q߀ZԢ_{Zde/2\im<ϼW·~,!Fi7diΜɌ*o޾ׯ̨WP4yZfBQ`R¬k>X.&`đ3D<ȈrY b.gV2w. jA$](]lE!oAlN sJ$j`ECрydFb2Xh1G EέRȰ, Ƹ#aR@VD h$MW7E0"n)fR21FJF}Re,9F{˕ln\Թr%jl-WrqBpMo#~PS8h71IQ.9 \ctQD/7m~uRex477i(q9? 1{a]BpXOo}6f\@$?m aR/1",D!nHM-a$EV 1*$Y٧߳5u'f &nM_>.1m^ՖXؿ76=3k_6 LEurM+Kp3"اt'KI QDpC@J,s$#MM;YNM=Tx2'`ݜ Xz]MhE-h5I)-|s>r7=rYߝdnN0;|>\; gN(Pl2y2QT^Ml;i+EԓԦM 4L"mqIgTKhi6i#NmY@}dWt)eV?aF''i^fb&'&a}TP4N*w &;޹t#3h<{ILЬnQx2hSƤdUgOcКR^wAϔDAA,3*DW)UΟ7WueXd3į8BtYx8-zuX01|gCWL_hL \7WT\YO2j4U 4g {f[Q. 
lOV)2#ƒ3`.:ZɓG.v\*۞+Ldk,zzu .>bweR?_͖ުEu7s|&3\7Q$+y93G :@rV׋-o.Nz?oj4k†znK8"njUD>Pگ~e<rZ; b!+ bL++I.2tb)fb4  w"Eެ.>h}>=vϻTauvc_vLdc:)QaQt\yC"٫mNj됋!<H{(ivM}6:۵cttMDȉpi}2\ZNKKz ڏKҺKV΍l3ݣO/W~_@!*(d SA'omd>xdɕ1 hKQv7g{2 dRIŷRl>K,.^ rǒK77n]-|b)sO׷ړf1)`ֱ TxEF笐R4F$2>@rP.Ix|%|,7$Zor~heϭn{(c|;c *<}sE^b{jrHQ Pni`NFWf2p&HTDb kEW,֢ۄ(q::RAU#=ۍӑJ* 3V>޻:M9d_]?֗Gnm<'tttT:=?::ng>L\ 0;wuD98~@&6j+'ŀ$z>7)X紱QH%7P=KF%c+h|L>`"RZ/CrcSupD-`N88AJd#@ ʓ;grLsiֆ9y*Jw:-8'r28ѵ]P ƻ̠m 17rXCնUR}`;ݺ;g_EEU u 9]5KͪmH/y}qܶ[_Ycv˫W^Sfm㔮>۹Ovm~>Eyᒚu[?O J^H oe_/y>6[(\PZrX_~_;yyd6ӧ=wceϋ}WxPx%D)5"Q6MhubGKVPCS2Uu Fׁ%{\-68{hL2m) %H'RVAu4>jBfD%&]%q k?ʎ[ǝAq•#K^rF)huizt#Vm?mᴵnp{>fW'F~NH8H 6Z?ڇ`Ե y^vc<3SZTJrK.er*I˘!ZjsT;yZ}ַ;U7I~oO-kC7ýVӶv8'w$B[=ӡzmzicp^glǡ.>&Kt*F<;O/pg&^ R)7Ԇm6BM*kK~|73Ncbۺ^˿ kf@l\Vbfp ux+߿|IJ>;?D8<Jaq<TBt`jcZFC r¤rov(ET\V~v S-B:#h.s*ެ]DH;r49 m`>|}rsn[ߜK78%G^_=lsR;]":aEEdh0A,O%لhU3jΕY{GcVQ=%]d*9 ZҀzkn!Ir]mg Cg]&]^]Z~ceyZկ| .~ͯ_FV05+lL1dn,P*3ZrAl/ƶVihe,MCH,{*b5|Mbgݭ;56_5r,Z`G{*%N4l rQɛbbD}\]VIzq6KjIF Pq`5Vߎ6A)B`KLI&b: Ŕ 7M6IsJIK3Ej'qsbuaYb+;Vj[*\/+XiD?9n6Y<$t,e <^t*:եҫ/gUz!oJ^GTCv++|EPϾ U5a(1h)`[IR*.Q6t$jZW' AU%V*kI*E:hER%:RZqc1e͓ U6xTuBGhiA9EK(X9%S]~޳@hBYκugG9[՟@SU> ZdGC $6*ܲKvQIVsTgt<@OZJQF";)Ȳ2HN7D{ (dFN 1ahV';m PI6|!{gD*!F]hLq1<).c^?_%O۞+O떕S,wx;vȶ:p90JZ| PswpRKf~~"߾ УImEYnGv)*(E4ǠV5d9&CLYݼ)j~q FUW:bg?fYɰh2$)@$LqG|LhR6Iǹ!D9D3Pe iT˔1UI`Q`EGTEl1cʢ|hO%O Zo*oBjӫ;ۡhឆSB 6|s|h/}0`@lċcǁF;ļr>Yeυ-z#ƹ{(Ȯ \(CU'M?:yHY&Nwo?)͆M!.d) ޵5K:g$܁vUes6Uٗx WG)mOc8CR&,[h`}~c"Fu Ak$gJZ\vT6J/ aɟU'`)d_׭\!T:t\*dj:JʅmS*e64Y U?P"/t?,+R 7LcU[17!&"CYzt!.g k\Zd D , #KA,OJĘϮPkjcLpg g ˹*D&AN N 9;2)96,hP>~̃30֢A\2kqhHXCH/u"4MyOgZE m^'m"҃єjh2sj${.?*b8AAƬGz\5ѐ i:)*`FXm X!j­veM0 F'ҔtxQr%PP 2*0#VF) gh/\~;B9|3 ,>aR~/#Ҧ3p3l.Dl,J%n/L-LϨl{i)#%?h﬿͍2;0&2}Rr`(D|`?ʦ,Au [ r.ŵ΂ ;;{=(g" [ɃEJ9=&%W%، _ʋbɧT}?6б~^^-\ν-jpe2<>|.>*bF9`nb00~}@Yq"$̡ToYQպ3ioE\ɓ7Wl 1_茘;Ode9WvPC>W7?[^ER7ڑ\?]0~jY* C3 W0b~qУݻxͣN'Y7j\Q-"QG.4N9$,q}0^/. 
&ONB3Z2;<|vz:h|.3Q+fЕ~WldPwU%U_Jٜ*# x ﯞG|FiK<ۑ7|RLHd;Q_Jk _QT%jp]i- f /%pֱT㥠V*9Ԋ=*YבYEy 2Y@erYP(Uȅ5Brd<3)Z5c+n'ang>Telk8N6{m+ZCq ahk=||9593x($ʅ\,$r\9c9YȜHQB$#JZ# 3*i g;%wZ@V4;u6F\B "[rHY"!:Ky O ܂tJ;Ɲ89DxX47uoY~99;t83o8ǻDe WbvKו޽WvIg;2Uqo3^֌. Sxwչ(5.[֛쏶t:k,;vnZ'=oZۡ煖CV>lԙy=nAw9n]VB2 Jg[s:i])^nsH]n'C/hnn~$\Q{1:PT^|TQV<2Q3B $P"e?RQsUGOO'ʆ4۷WmZ .H!$4#iDDHtd1Yz[ XA'ԠSlPVP@ 1y\NGP$m)r+^?h-?Γ{BMr{<#)XnLZ8֏ϋl!3?/ m [w,'BыxKTRs0r!=C;D~!ۯWƓ(X}Gup5f.eaf[za6L 핱2?* @Z$Tj- Xpu4ڂmV}sÃo4 ˱wC+` a5_t~[i,{ ~Y v?;~3N>)|'Ջs%ꝗI*Q2ᴍ'4 R. ̪{RͺGA49P*d>d8_?ĂSKn&7]I)rhQjW82:Aɒ儬lVO$[Ɖh. JU^FOB輋((=s* 0CsVsG=FOFet <ʘ8exDo:$p,LJT ޵4$翢УzgDiDZ=8ÇzdQ "(iwg JC`hP}H̯·Tod&mYQf싅 aƒb<7Z~ۃ;6eڧ!f_fWcdb,L)5^:ť$CT++>`R+b[24/lJƚ6@C{ GŸB\U=8;]^r,M;&ԞIaaTTyVճ;V 5.pĜ"bJ c-wmqAzCfx2V96G(@䮾,N5C8pԷ ǂc_DΈH"N8q[;F $ρ˘8Ɖ8#jeI'K,mvvR$^՝XMDSݑ@{O 7qќJHT{}',/ZP2agso[IìRt5DȤe@0 ( j*iNϡ2]q䙮0n8n89=B" EVLU@I[\9+[]=誴q1eakt4']iREg6;ۏ m*wBu{5T&ñMJz5F W"0p2pąp*p%ʠ;\5)g+B۳WooK\|2w,zqؕ<ʁUC = (="IT'[Z-|kw.G7Q`R P(:;L)b}2 ,f.3Pl;cleٽڭ]9N㄂}MSa3Y¿@ (F \TIi2o Y2QP"z)JD/M(Qr֔XbmbbKQjcϊI]|!KN{T)˾kc"UO :9:iB\MJ _([M^lDߢИ(G梙W737pH,Mp>G>!;ۨ4(&e ס>yH^5D#I #J H39"4b"+xQ{) T]biI-.'av՘ j Pkp]`3ѲƢ 8;N1Wח~}%13ٹ^]^ m/nv{P. =s2O7nYy/Mf3`eA>(ȉ!hlK5fuUZKwLׄ)R`:EٺL>GTT151>ŨjVۅtg?ɳ^__^u}܉۪)n˘gy$ tB }u'u'3I v MJSOǏ0B-Nr4xᐃ+PhC.X)5 &ʀ)蜫*)-3R6rG&l̑6!ս1ޏ{V`k6~nrz=Qy+i Wm/ݮ翊ksI׿9{q͗弘o9/y539]ݴ]_:+;\ybUwwts w<2ŷ\xjynx]/*lQͻۿ;u{XSZ]F9/jt4BGiܦPqBRoү/g3|6\=@l 7@@Q|{l77~\"<[?"wlGuk +)ĆU׬`Ta SFJ@h} ܗRNBKΧXB@ɑ0#tY3)trڊѽIg!lOyu9;_Wv4+z$9rEu^(͟xp;t]7XQc5d uV[kYk!_6F}GudF9Kc+ /+ Zt-l+10=R>UdcX8` O}YbE%9@e[,EȦ ;]Pr"4 )aǬ5p2 9*EA/b"Հ%vPa"OјlXVĂRzhj2X)%|#G`56lb~ǰ8w c[-!4Wtrr.dQ~ v7ßPUTip w5$d#;ٔEeޟͻ7.;R~{_O*ow׽K;@U}Ku?u4kdGCSƦm}x+MYY`|7˶7_ś9m_dG(7[ynϋ_.֌hm߆NOݏ ߳_;9gC ׯvܶXhG~֑Viƴ`3cYa-qs5;OYy:._Yn&ꁵˊ 6 $`X׫b]+>t \+xID! 
«xLm Mת#DkZk+Y{Tߓ/oy-rXrV;DD@#׿ma;tiTt_Wm}]; &EnlRMòH<59C?X~|kVتV-y:~u?^|R9b:Xep,t6Qv(蘒 )EQ?sj+2ZJMI|@,j#ù 0o#֒) "`* CeFr,b!tBXͧ~HwHeG֧!f_fWcb,L)5^:ť$CT++>`R+b[24/lJƚ6@eC{ GŸB\U=C8;]^r,M;&ԞIaaTTyVճ;V 56ZpĜ"bJ c-wiqAzCfx2V96G(@䮾,N5Co8pԷ ǂc_DΈH"NI;e+"hiTCcLm8J#"eY|paܖ֭ɽ^U6tbY~{>1ȵHT}* Ď{F{FGhDk2Z=jV 5Δx5[o!9 )eo?mH89}pl-SbLAaGZ^I(X+O&'#j6UfW|MzJgQӭj r'UɷQd\. ۺ3x> Ean;n]^OvV1TTkbV.-EىEUjC*LԢ.Avm tBk%ʠb @•s6գSXCHB#vMlaW|fP˳M*>ʢ(P*,vIIZZe.FD0*}dy4@OZ ;_bhwqcMuDhaJJ#8H0T+4&?mܝ=*1eQxpqN_UD^xbi켶jao|rX&st5n:Wf`NZ[.۴dw.;ch,Xroi,^;cn RU0Y[6|~4m#Ⱦ}.T:QpKB!]rR1mw?I_2]R>6{IDztjկ`alOxllx 5B9a4G-A@zx1֩SqSѡ[F~A2+VoGͧYpHQ )fZ>~ʐ'>d] }dn֓M¸fDgp u^Ÿ-A ETXX}⚢)y}]$%NEk2 }^ ZQH}`K6"sD/k'!(0&g]PI8'ѫ⩣FkSTAJU"q%t$!C?8,ә"Gzf'NY'2ܓ[lqedDžvT ݭt´?.ԭm#܌0SiƑxF )I(xNĂhr7~oK b>` J1%RX@8< YN6p|4E0A$_sX fӢ`Mog|S =.=0{`< :3zP0(C=&bj`MWj'LjJm6[ْmCeL׃V)sٿ^uI nnD̮ $e4m GW|~kD.<}fj{ E:Yqظ|cNg pq9!ZgIգgGZ/cf୯s5)i'GvD3u\OX]bkiTZP֙6zULaSA}UYQeRअeX/M BEb NHe.@#NgIISFKn g<{&gЛRxew8ԢyY@<|4au2h?'(cԈSg</_@2EVp P8aHme51}94"C<^7]Z"J *)50|hun-F- %].X6F @(3j(Ca*y)xۥ"g9wΩz'' ۲tE{w 7JH!В2kQ & &9b "BKsô2{sh o~t]uϚɾå\޵c2xP2R%ϕŽJJSny҆ oT!""a:J&2; n[a^~DCz8 ` (\ILV!be௩9 9 S3y02>%D`yZU1?9VE@O2GDe-xFG82E쯖$ZqzBYI''JǕr"9 C̎Od>L>|u~k;> [zpeWɢ:oZU>w7ήƸ!.ר+NQ/wT7\},Q=z0Ì k?\_o|X4VpCĜoq ON+.ƿ]gA>>pji37ML[fU>! CO*+W1kxt;[njq8+ni{Eg-Å8CHX srK^i#&ӧXVk.*'^l8~<ËSdǿ/?_?/? 
UH03 ڍo[LۚM4|j4ڼz7y >0wڡk @}vqqXq;{Y=tw̏8 +H6|ZiX|}.S%UpG?hA< h O㶪#glKl>DxRLhpAN{ k@Є[gSlQ 5:m*M=TH!apFdr;H7TN[ #X(<%r<ghbH]hYBR 緋O{o(8ɷy4.N7!L#gd'a_uwV1B y\$ɓwݘy>uss}!A_tPt b|7(φ~:PJB//?~N`i)O.LL yNEJ&νxm߂ʅr=Oțq&;m6%@w9ɕI`[# cC*c6eھ2'UPNc(41Jc=$X9}~K 5*SWoH\Q*eL_׏?msZr-YPDх")gM2Ax7XȹůȠD 4A(2, ܋`w^G!0 Ӊ).K hZ1Rdt;N&E@xa ;ZNy,.N 3ʘ-w~k~^aohlȮK/ i:`8[)Z$R:!&y[⬩Yי8=1--oGe扲A9s}~BH(5L%~ =Kvc*/R胴9ԑh\搂 QmbX[<~"ջ⑨_.󻪿]jFLkTeᒬk(C#BkNI2 ˣQ̈́ ΀V,׽%VSidh#5h\ =ΎQh&FN.v *4#3Dϑ,yB5X\]lg$T@B3T A[{HBs_i)ְq9sa Y15w)"kB*Jw)P )0[,l$a!&n^ ׂz9g,;pp[qqc94A] 2 AeM-5R]pIߺ]=7EydxYA ?I> JBZUzhEpt{kGb̾XbF+JJ-%ŢIp[(ʤQxi@kKH' ֺd )2GҭTP`LX $2&q*OWSGH8(ȃMyܴE ],IWR4L" tM'NY]R3YrUnal=-,MpED&f\,xejӍ?H6rSD)$QB; *A1$56e]e8AE@~iMNN@u/..'n+=x3NCsyucܠ̿fy=/r':#ǂiAqxLjbad`$XjDzMl'ڮC iC tyjn}ٸӋ145KTƾn YM'qlMcMV=@ӐxyoJl|W#_@~ZL+/=0{`< :3zP0(C=&bj`MWj'gښʑ_!ev.eGCg{"_{b^HI)̺(0mӿ~Ru#lCSReTuB=jIm Oz:jU޴8}teo nN JtN N%'&tW<-Tx~޲c~{Ddmњ6޳'Mߧ*x͜޸e*\< pMc|OLy-֬dѸ<zEGxḉGϝPinB@'=z*c=a7. F`vv1PIyG{K2>(o/>H06ARzBVrV ƭp;.6FAv;g=dg-Sb\ Z%*P)d C4z 2']S?_jiDBV5X(Kr%BRDB8^N#$ɣ q+RcEzʮs͟Vx]IiVL2z,lT)X&ŀAIP&Z+|_,1x=\Eqx\ȃxu2ȓ`#su?D8ZE.mi>bRo0:DCŒ=_.+`Wa c@0)z_<-ǃ{d ؏j(tZsqx¤!FtׇK64nzmp* $EdEk3F|r>7ǟ< 3<ҷ>8DZ)gMۢ~8<=? 
N+&7q=4<}3/!tJMʻwhz+¿u\y|ǫ>̃h~7œaۑ DrU g 퀆6B,IҞ]nn[T0llx(֕l8 ϓӓUg^X!Y-7^(4vly"զFlA;lYF_Ѩ9Ic'tr~~?ӯ >O#: 2@=[~[VYY ǃ_89u=KoV=5 < j~::iczWZ`[JN؄FO?G)G%(Ki߹DqR jK%J``+IAM$")s(9KGf6o>D<T&KN`Jo)ٚ dAG:~c^6oSw^}{]S.>r7)FJݾ<NՏL]^J&cWh;vBLBPh=u*Vdl2JNi6%Euwh2$#A$L(RNs2Fa g;BjRX}ltv5c頖}%WWol3o O?_g߆KK}QYQ| O=%s;|7;o-ۚ3>AgCoCb_EX{dL;8F.R.:ckzwpvD5 V|t@ m`'JJ2X%^#m-cq6(q t8s f)7pY jF"";-/n<.Z8%:R#u(j}ttr~YϨ ]tf0@-c[wE"FՉz+$=KA"xiH/*H@AYM˜2HEcϲʌH,ik6JSXM1#]1cIF r^=UyKQ3qCVh O.}>{^-G;BA>jǩ;o|Z7>N\+vK8i] }㤜FF7>+0s䭭(vl]5Zy}m Zs|fWחx <{1LrLv!= 9{٭ ./i_wX>|s睅% -U?wtLTWAPd)l,J;*yv<[ȳ}{$jְ]H9@FZʌS"c#"ˈswUDF$j`vA{db#P2Q,]"gUI"Y֛ )k$hL3OJ,eKH^gT8P&/@ك.`aBLؘ* {T1(_YIBe 2qKJdu}53tn yBNʠ-,*ɑՎ`X5l 푎]ФKvk8ܷ!uC{8ԇw =ϴcJ"3اAp;ߥqXrRYE艠!V{㺘'3#u>(Fo.NG1+SRsֹKj \1ٝKO;dފv-Mޟ==N'|:?-֡w U=R$U@+-$smJұRHdKLLdotֵ[Z^EbLVN7]?("NhQ, ,1VqpIoω*#PA_F)Fel&GVf3㩶7[w`.6֒RXGiʉx'7t8}.߸Z;'qAUR!Oce!{*FPQBzօ([] C0+cTZUOTbd PO$*"кN+q[x6sXf=}0S&cd,bI'AY $U=)0E 2HԚGϝ+ {Ȍ a/:ilk2GQ]ljؚ8ةbq_,bT-"fCIG!ĘJ) vP>NA4EZaߘU9#FSH=ie޵5q#nd$Ѹ9Nv:I*'g]*\%)RPvtϟ"J%JYCkT)怃LF7&5Pw[FĹ?&z՞p^>쒯5-y$.r...bFX⎙}*!63NȤ9D đ4`=$f R5dCO]ִx[C8<|k1A^lY*Իg޶epf dX9lMp(X ֚dY@ն2I0d!Xt9eL*t\PWӉ2&.ko9KOaYR8X+5b>䑥^T O"7e`'Q{2:#:DP)kZe%mSĹ"/pK%L3;;_N/}.jC~9d>'ou3?Nbz<|=:mXjoNԾ6,҂i"%b6`{WE`?I\d{s8H+pUW(dWE`?pUո/pUu")=\@{W$"WEZ`r]y]w݄Thfa8kV9LIq ͷd6oQF(ȊwĀ#TGH.8; ,hRv_"|"}qbyX͵uHd\eRZ+e`BŔ}´9RWu[;Nűűۖ}gq: ǣBm9I  L 3ZŤ!ֽfi)/Ypoghu.#LE-cC)1Tkf X HZg>0nqBu,t e TdIkK\9Ȳf2|mD-Yklig]W`fNNRT%li%92ht͌,X2(tX7쵔.1孴&Ц |Z[3?I!%Wuץ J1f Z V:hunWxA2GH5QI@$gNcLH.Lm @ =G+QyW2zEj5v3mii1 \HIN9LgbPI_R(+W!.|Ew|Hk*X-216EutF Hq r,hy a2Aq%eE/E].<3T4aZuXmΓIMILIxVۥ4DY$Yw'뎑Zɶ$AH_0JR^VLĖ#F+}Vß-0ɂUҙUh1Uak4;ԧ|\XPXDzKâ/' Xe!dP.JZ [͙B|vYĜgf]IA邰aVH:kk3r1+5=Ɛ<ས@+> ڽvZ[O kݥDL=Y)D~ܔ^}n/Y*!ug"0SmblP:G6ƕ&Fbi8;\i~a Gud4JT˻.ύl̗9OO#g"ʘA{!euȣK(933>v 4ʌ) n|Lk+T`8HD3ef[&ܭs?nh]Χ.+H.ohP>ϖ :\D7^wO6ݛ"lp\6_NQ -pQJa<y.wuVe{DxD(ܵ^0HB2o5Q|0KBe͈N-JI]5fBS]X"16$;ϵS CB۾٢ [j%C}gC$];X[_NNn[s,]PgZVsr " fU!?XUR2Xf^D%|< d:;===|mȣA&""#vx`]tF+R< O.ޟ6tp>3r,+n3(搕`# )xΌ:Vߢ_y{`שݖA 
OWvC԰l/ŀn?KT:wMx[0~cK+]"pLQjO.[_ֳt7ɞ.Y2-KGoEXLEwH0Zwj66OK8?.LvVh"l~8^0@V*=76q;]ܱs51usBF򎎖casNdp!_=|,\r*Óû'yw%6+iJ"Jx ͺ*$u6qpVkMo'lɻ @n'qANf Ȕ*>3ޔ&!Rhɤ`YdZA!TJӚn<*ݳvfEby]7VJ~S^s~Jl"TC 'Q~9ҀVf 8hM'<uăOԮ>Ti'CGNjG3>N,p IG GQ9OJ$<3dVl7I*a.%@?@ !qIj}ԖhYĶ%Ζ1Vzޥ5nx-.h}X%*dhقE87B LˠFQcsDI5]0r`3ee1utV1V .Ш h :ka6eF'Z,QݿDpFHqO8+u1d Q(8-Esl6y-2zQސZ68A EI\<Qjp%|c0D<۝ nPܼ>kN{5 .f01Ź4\qI & )i",=/`ޅQ1܅Qԃ^}tY^iO,06xf)`~pc9Y<`u)K_4()q21p:A=#<;H#[--ŒCҭ.0REVKa8nGi$\_OS8}j|u5P%-`p(0f`]:mQ˻`h̫K76M^\qJSH_Qyit~pHluͯm.|w1;]|f"5t>^욣mEɛS7^ (_#czvcO(olS76v* cO,Xfxtpv++{]>^7>+-ت׺l=%?&Pn}QzA>&>CrS︩"c~x~y秤w?o1s{:@&,,o w೮tm[v- 56z1m>~?7i̧Yjfm H/n֡DI\.Cݟt5gҪxXHWqWHl2y=a=[4RTq WR@!8/Cٍ{PFk\#]1O{HRP2Ifg疈 yN%WYY"5nULc.CP%u+ld. x֠uA gY1&zK{J(6&n:唵N+;jM'Rh-70«afc/%ɘx q]6YpaVͦBlZ}!AŊ!XzR'S)b- ַDoPsmF:-E/?{Q=mĽs9>lb,|[3QĀ!\+r[$6J6?MH]b> J5h43jWfO15l`0m7gM~nO>7З"mNb{ gV: ]X|j7irj_t4:/TSgT`1* H&X@X] M ߯_ͣKׯ~O޺EG?r'' LJ_ UJ]O?n'vIvҬDL3F2IVz)*t6v(ڸ&ΊUOC6^ԫoĄ>h( }P跉fO_H]BMmS ] N׮f^8lXs ~|u3 7?Z\utq2/;Xys39Q78` j{3p]V[3}Ry9P%:fg1߻d_26de@*͒_:]P/Z|9E֐O:}+ 1lb@OM'P%߽g༺l8>W+јX0M{UXXZی L2>)QVx}ND ܖ9+9~4 z*8gR;6 Bs@4ldI!) I_gc`f*~ף ܖyblշ XlB4h*2h brl54PfZ= c- 3FAk#"Yi( 喋MȦ\(ѓU+['q;˛)ڛ0a 12W̑"˜;@d^6k:9f: a 5!s8Gsxcq14IK$h#(jOl)(B.Nc!N$B"ku\"v" btb]3v]mߪUB .ƝÆg2tCJt{pw}͊\xqm8)xp}wHdZ#; @}v^fwt^fZGS m%zG7]QB?: *c}s /§yp)O@'&vakA"|mĤ]&LUR)t'ą`+%K&f;!`k1ŞbrO;U7Ӌ#^ǐәQnu%9Bf&x8iݦpL+ٜ>4垜Hwv%׬87+T>fG/HHH] vqy.(kYziZOO90@+pX?ǏPWh7Cv mbp85p$Ϩ]'֛\cn-`-f)>J!ŘMT\1YMf4_31xĜ|3~qfm߼&=L]{acXgw7 w Zn5uDA`lM)L ' ĶfߔSNex1vUB{&cN~kxьËf,t6o_픡isϦ϶p -DJA HAobr PbjqlVI FQaвן?U!Wl`SI|9*drM6j8MT>^/y+c al4̧W'buTB|ZgP \oeW#T .,,']cgEY?`&2SH[K23ƜZ:oǜk֩SmȮP٭ѓ gڄ((px/L}a}A}a?=eM1_5קsxeDF% k\9 2w\kFjeؚ>DIC=Ͼ>Qu3;1]59ig26)"͔Gn͗^py*^{옽`߄? T*Ĵ b֘l AZ”slFVhZ I4BVdhA%bJtr G :)TKҵÆ.^_q} v.N#8#gmw"=S̥1{0EA~LEq@~,-_V+ 46bq$pF&Qt0^r)I;h2Y(=n4-ߨ}YV:͒gEku_&X{"IjlŪ(P#2]Oe͎C;Joi:nˊ! 
`" O?ׇ4/ u@ njd [u9}4GO816lbbj,Js]d-qkϊ4DI=C\.؏yfԩC1C#A7DoCԅ͐py3$}N27$וE7<"v1ݻ$֩OWb@lH)seB6ƿilU]W׭-  g_CV؄V;[u ᒔ·Xj֚!&7tWc_c 4o3W; =جPbs5L 6=ʤKߗ;3pnxm >|}Vf/'St):9>)CQ{`ЕfS{R'\^O1̎!dp/B\rMjSHb.((W #j$abX#Xj$=vRc b3rAӻȩݱMlH-5Ke°LlIӫ]B|V/?6bu-w}~-7um\|n4:^+%\EWU擻e4bCl+G( HW^yvĩp cnO2SMW[g7>ʍ?:$=+N gZgfmN8Groӟg7W`:?x~Nі{w.;|pwz;ϾMiTnx!ko7kƗŗ'wM{͛|+Ő#0V[g@ d%h)b b18q求ͽ9m|*[ I 6)Fuij=&c`77f7Xi!7^ Uʨi“UoQKQ.+QT.5mNBq,,Mh'lM⍏U l89\S7S.r9  YxޜP,}-u5bıXjijUE@L&b䅨OM%)DcY2w @/-Wg{x^q ? =#!XVLkq:2%oA (H00ix wIOvI1U)TNeG#cG]jjo\Zx9^)<4r#JL9 3NC_"މ7θ\ {nkDv e%ޙˍcO6|#?UV G fE5"xCQBL@Eƣ={4c0VR²(ܡ#^L>)sKk P2Hf'#ŷY#OҝwQ'&_zW؀acgmI %ȗZrV:!|}|97 $YuYzp\يB甕ē]jӲ dӁq[c@O!-J#`Hm nmnC񻚏whl8 '(0!SDU1ktR@e&\KLGgMt4՝D|=;Y װ>2dѢ&&˜)'ZE%$+}):ZdKx 4>Et>@4"̠;"E R\-L*ڭRafXTqLy%A5AP8@2\܉=3AចN-bD]UZdbP✉ا0RiJhRTJƹ4KD{vܞڷ5(DDKc@Xc)4k̘{Fjc.+%~$M)2cKbKA";`!va,|0m^?v(v ԙ b*UQ^ھ:l!|}.30(7ɰb%qx fH; /)LWspe&>E@^gG\̨{*P"a:_߼NxuϷߜzuzדo^ djWMuс{Mɻ4j5M[4&G=UeM3#f;kf J?﮿_[}.oG3']:MLЊ[ t%q}6rq*rvWaI&r>q0\u{::x7WBO)RXpa" iI*zW}0yi|a#S֊i$5}URj&xL'DjFb M<]\gઽ 0Ta4@Bƒu{>@:;<ˋ>8e+#+(%Q[DԁR tJ[B'#$Ql6Ƽ柸;պYLS&Iq2f<42}8w%G˸=1Mn{e9:j] Kӆy;H%7iQh 5(YYezցDz[HoA"(4 76QHgX².|K*dYO]9&rQ&Q#"( -٠\$JC^.!QcYf.AGZi)C`>V(<Q96,z`paqP-"'ej~db̦SLJ%RЈteWjCX3(pFrN[쿶ziEL:/!Y ]CRUQeM(&JK^jQJ' ko=^b`T.^xl1u3Bv(3JU[91@ .fd&_H8+*4r[Ƣc\gBM*=ȫPLJMSgO 3x`Yk( AHX}@.P *u4 5AS|xR\ !ԸO`.`n qXNÏ~tFyWB3I4dDʈW+8.u|<[ώf`jKf'.Î ߃ыe;݂]mSUu#bS[t3w;jEa8U3|JMK (*˚6fVpYTz=<ZSi,}aNRO,3wch-{]փ%`x;5 [ Ɏ<\әWߙ ~q79dD98&6yDJ&^M$0t5,#E` !l #N:žnk v}z*XJYNm{C1DheI!.U~E.9yK/ ID3CMe^^sAI u0""%ĥ#;z kwLZUHY-!xp.kM1&y,c.FS0l;+jl~IU&ZUVrwu hj|yBL].יp]Rp^ V;w+wra]wr$ TB7 @JeUzq0Je$hų+%,Ժz^͓+SY橒!).>ԙyMc( էK*.3i743Ǥ9&k&='e36tJ:dU}JsOSNܘ| Pl߼&MdoV0#*)dFR˔Dcr&_.߷,=8!\B͹ܯhqhq"N@UL弐パxIh: hwPP-vrâ5Lr^|\t7.4D8ƴwQGm*/vY8lcgպ-ßh$!-NCZp\u䶸G] u'p53kZ*8ʝ7\ \@cZ1+j$~Z)Kbld3{[UfoݜxN4[)= `eHV 4K[w=M>AJƧl>8L;_W/Τ hʔPXJ^Ui %5l]3dw՗D0d3%x2C[Dm#W.p{C,2<}-qGHfȉe)2 Dj[UUu2Vst 맲e{[ {D|+C[{,gמA貛$t\rLJgfwE sLȲgH2z +R$3gC0ZGS-:}F@gRCZ1 rڙXz#c7sv#c? 
yơXH&,<*>踽k&-ܳ}uWS xyyW;grYAMxReWT\JjeeIICI95TВX@@M(;lMB*Of*-[˜݈F9jO $WXb*Ul-AGGE+5Y,\rSU%}-II>[! 3(Ytn;~ g'Q0J^-*KR{lfn<\$<]n8D"b茈aB x1f yRnbIPOD)KDJ`tE%lP]cPΤ$رc,4d1zΈ݈͜g: .c%V}qqu '\ѕVHj.3X& pl=<j(Ox\ vCPwC}:x9ojsEpS+k{lJn<>S -N '&ah-94@۩]hx"9rQDrqHΦQe8؃2>行ӜEAG#P@rH8W;2a<+9}>߮0-ᲆ2!xjP@za0WcT5OӋG>F7؏hzEDVN(bTZt&c(d䧁dTD&5MO`C<)ڷ f9d-N`l!hF gj}^W[&Xqfq=6~!VKVկx_9챩Eoj=&0ˁ$fzm,yם.-XK ~lL)u5 bLM+%*Z٦< GMľIT<ۉw=9wh$Jr%1іR=Z%*d0)XjA "|/Ry4%+IDˊ Ar9"! 0{/eIytJS{-{2gޕ˻q1\_^;&*J+/nh\d"_Ry? oo=zڇr(X!%: ђ&lIfbԶJఉԷV`td F$$2c+> -t-A޵ٳUG 2Hq_^/pSc"7=@5%.k7x/pChHR#uBltoΟJF#֢RchFM;h9!> f.Sf-ٱUrc7Wހ5 5<][ Awmݝ? qŰ͔*_7gÇZ~ݽ6nsyHx^pN"Zf+ -[b攁/[\Nej[˖X6T;e,[AY{s\NiT:eʳZ A:Q_FvFvFv- m$$5֬(J.kƢu볊(rw) . }[i(QvQs..U59pHSS1 ` Z2I>'8͜=CGTglU:cY_.XwF#`~vEZ,[dڢSm&7BU%#/76y2!9Uh S4En gy"xH(h5}ME2&*!fkR؄8x]TZ֟ Qb+*U 8c&^K3Q3TQ*O޹$:9κg-@-9G]et/P#ڊ\p# ־(Й\w3gωճp3Ţqy7Y0Şֳ/'}@+$)!<2DYdVrm$Ҷά!J&D8 "4_[% GcU $ԀcʾH:9THq6Ec@\tE1RXٔij\.*pĄ"`VӀ$-rjDs}n!JiRY(6E0yJv϶9Ӵhn;>r=WC2]q`lE '0c Xے.ɱfs ":&<׳e3![CW<*SE хMf9gM4zF *<]/ߟo>X )qQQkUe9:F+[;Q{2d.I+ ~B}# E>[bv_у842˨"E3@90_-W=x3q 3D2^!'rcxNWt)R`gO@._DV% q[RV?//.nG]37f.&Bg,Z-G>v5qJp3_Ӭ`uMWe%:wL)ꌭot?_Z.NӾӗ nБfD5Cέzqgyдɷ V(TgC \ٙL\dKm!b!)z{isw<^ Xb*H<*G3TI%'ꚌqN[CVYu恹b7P^֑ND5t3sR?A8B?:"ZvҬp4'66 O j `+M;bcE޻|Un/d9ΧB#SE:3bj.GϤRXr %5*M!c^5Ʋ+ 6,PеU@^نcNKV㖯,?ar 2$V\Kk )%o`Rt";Q`$A֛Aj~蟗l.Lqv0uaGsWl"8T1).[YEe:LN|S0W7.3) Lx?u'8F'lRJсbb3.;a{,L2|o2%5xChfa!pYmdν)b܊ΩC~kq5w> FսߪY/QϾovVyzz|49M.w_` xS|Fk!-!xq']=./fFok;=pvepDI/t!̱v4ãîg%\%C /gEѢk8z[-)U͈HCMÄqK/|<;zrS|e!WYER4|6>g^yP,Z[njhLѨ;=G'4??~\wnh #- {D`~hr]yҩM+k>7iW|ޜ>.OKhbem'}a~<%q>mwʦŚIGi_!iwˏ&8E+߮JBxZA TvJq[9U,q7sD;'J 44\$ 3$Cdj"D&gfU>bO0yWsauq+!14<8>ZaN -@]x @Secg<R6>;Ͼɵ.>LKpUô&'%\0bKgw.[>^q|/Eؙ} o_u.J咷~#Y8CA+)]_E>Fʲ{Zg Őnqۤ]Xp$ߺ_z'4y^:yd*$ qϳ hO1$\)ID|pa%å]ƀ.8XIl#S_q4u% %G{oS'z{zݻhumٗ<d䔛ۯhz0aщue&"}VYpŀ-YFD,=Ն׬V}Hu奼^zrPJaJn3QeLXS."dKp#]h3;L 2RI8ό L ا}L:tB=V:0/](|+s>\\|OtIUzi*H!,8 G']0VE-r2䝡Ru4 Š˗<] Z|\s+Lg mF=N7s4څs"q݃i [^R(7(MOVrlU,Ih8gt&J 4n#sBsYYUh&-A]k 酀yCWd-IʅO\rrh}jptGaw\{t%tT<ɴ I/xڨHVDj"Hlg >`" zHHO!<$ J:% 
K.b>б$PP>%ŵc5uFRHcbYPT1dMV(`!X'Lfj⬇D׵.A{&XH>,KΗ%xaK(h"C-APաϱ Lx||(rFi7VyB9àHYyO[Γr+)++Z%K}HGPV PEÝ bjk972/!IJy]! %B)', }ɍ R #D# *#jlz|lNN_͌Vk(5-롫f6=٨fK}7'_(Nš׆AI,HLZ} (<όzKIq u.x0=Jyy. skHO*G+j⬟Kq ǧ48l?2+8EK*o=t=A꺠ڻ qkj&3 MJcAA)Qv6ӯ?ͷϟq")B C6*Ix(/}7t5tEQJ2ZĴ htxFe B" $L Vjd J7)(^"Jj"f.s]?[fz o~ɴ.][6O&jN|mk~r=\(5MwuŊQ_ٛ7j˷{o7 |CU q'Yb }u͍gk11qoIVǟ_iReA^FIoj| ScZ%֛cJ.Efv+ת:j#=̍B5*TW H %R"*XP_{3Mi+V Avi|uȎޔJG!4&CKHV&X^ [,AKI8LDF¸ibXjF%+tZ脳=|։[@WsR{^j}5JtKh4BLj NkRi,A+RޣWsl)v5z䷅6,αm-t,7-m!=6h%a ( 5ĩBu['nLRhՌ[pku\F|dї>N85A.vy^SeX(y&b3&5MlX-qHBjׇ[w(Zn6tw_|Aeu[j%Qz $ȥo2,Ec b!Ng˧^DQ^|C)nNZbY?QMJOv>`~dfx񓌓+-kQ y|txCs녦=Y:7p #`i i5"n8(W3!x+= QFɹǪ8S)Ɋ8TF)Ccr9+os|&%U[3Vg`jX.62 me]h{]JƄYA+ #v'7 O,.hp8:lITqQjD׈Fm|FZBBÐ*_o4r'KBydUf1j%9dSU"=RII.FbB\VY#VgzH:^̳풭(^䬮^ݯ׋^\XŤǬ * 2^j\$ #EEzI2 .K{(zqWaq_}+C~?}*b g R {?nY+1M"ϥX oz"}X -$Twn/ۃ QR}3̐&5ܳC` 7*!44F,%K>ǾjcgZbw.ۡܮn'ܳ}8h k1YjD,DW@<,"KϺ;CI$Z<$#DN:dCH+Mlb39&#sm_K|wb=7'syyr˧oKxFӷjAOMsٶ< bzzz-)uرgC[qsw5q8(m sާO %J}҇KmT*L+;-2ex|`~/Ўr^])4#ꀽ ]u΅-4u(!.t wgDW  ]uBW3SBW!} "'zҿn\Y9y跭w^^f5|=2wonCs?UF=0C|jïXѷw~)&7g>%8plh~4(؅_!M܌ztBWm|JP}CWѠ8#Xt4RWW^#]9*߆ߊ8?~)}ߌp[)Ҝy(4õa.y9Wct8,]= m/Q8S{AW~C͈:`J+g ̅:ZNW ]B!UCW l誣qtQ.Uҕ=̈3pm ]u/{q(-t m!͈wa]uBWm S+EIztDS0G ]ui6J&kNW Rqk>bxWmPoJnG\7|OǫͅI?o&reGY|W!/.,nRDӿʟ+ǦIthL\umkWe<'uǭ7޷k6`1i`ihj\2I) ⸬ܚR> kED,8\+uSn&[[W|+>o<1UPLyJ6{I ۭ ? rțuR3s) )צ0j. 
clUJ·|R?d\([Dc~ ƇH&|[tE'WkO֢ˏ:*bŴkmQx=lP5U(sˉm"lMc1-Ռ`!GCQ Ę( *)\v*aZhphmNWN ]oq+:hG7<‘;3Hާ5J1RzT|nרQQ/@k  F8',VA8$5tSA1D^Ba@+%h5:vk<+;$Ȕ<%rO=BJ&:b]93-H$hS<)*6˷,T_)tBճ5>8=M(bL䢎 [j%RM{JW?@By_m cQ2MǪ#n5:8Ad@B%aeu;1Fe P,->6!;.%|߾ Bwޥz{Uʽr`]d}M2wu$I8|Ejޜtѧ͌veoNCy׿v7|)r&p]abXTFbkL& ͱZpJ5X]-XZ 5yb3Gީvɬ]}LoGw5Yl=*Vu,G>ԣ/BSlzttz7,Կk/'_Nv44V2[wtۿ여-,w]ozGK;?V[ڒoCn1^<9.J녔*0:$ɪktEuT{Ú栵hS|SUTtk౹1.HEk@fin {N'i{WN=k~|u|*mK=fE꽔C2sڋ{n\vK)&2e-iFL?~Z7?\/#k=#cg( @^ęf>S_0Su;\烰W^ShPJ NP qBR]8%Bj&KH h"GH"ג fjM`rm9Yr48{&Kj9ex(|i~|r|ہZl72dLoW)K1q`Jdw͡%ZB7YP&Rߢ=ߪd[2T6]}e bQGʻF E'y vs!eb' ]+ּ>mwyOj;8zvzJ:ev/fuwWl׏:'t}mwǟgn⬼G:8$\Xt&e QS&0grY Hq%YFeژM@b́UKT[b[fe)I͌cohZe4.2B igD.>uenH7O-~^]zͿ=[!TX{4XX{Q*`\_@ aZͅs4[-|#D*C@K_k)%pTw rG0nJ'Ʃ0hq(#1-0}28LQ̥ju!5*9Ƥ<&3Q1E@xtZ芃ж3\YL('-j4ÏyjV^\W5Ѭ@^3./^}‹ /jO$6A 5UUF>;/bQT|Qy@Lչb\l‹‹SѬP>g?ku}!7s䃫x >R7{Ov&o_QlrH[_(Y;ُDwkDo6cSjw)EIf0`А; Q_朩^)1N;q{x"]jI;FX &٤TL*"J%&,p2,Vʤ6Rkj'b#W%P :_sW{Fjpml=/O(Ч4Ϯ5ۇ1WOӷޥ5& }<|-^~z[\<1^}J_uS!5qƅ*JKɷ4!;UoPġ`54"Z?Bz 5;nqb*!!rod8DqNKR &^jU‘e#1_9tBX/Mܪz]fܑ'GfB.{ALKCSCP#58e6վmQm52YStb'/.0 |Yp՚(jaaJC_ Ue1l* #O[jՔzV%NzY6HxO*-iJ';NO֊1օ8F!\|:Jh6qPM߸C-VL>k zr!iM 6 b }ɥ@Xm\Az2{G3CFvTH_dӨ& J,lUY{K2㠯.fȕr5{7blRwFC#5.~Zq=e؜2\kM6Bg\'$,&/l⼒&d}ݽl7iki'ƛmP3&!,2adɋ~u`\bMPbMD^t4mSM*/!Wjme3\m`󹸤icPBhjTaM̤,2ߥ<$Xp-Rt8\ tЫf%lS|[|,$ff,&7f)n¹4KQ>f)%Y+lBH O? [nD<%Arfŧ!M-d8L.6~Jgk'V>b9݃g# U_&6b Ek48jhD?] ],h }Ci"W`3Z jB11fkٳ'92Dͤl1vuZ}8~ڛm}Lw:cgqemͲwJzY!@\}CN4f(iaganS8k"yýH+`bI 1S R\} 9T{U19J @\("I\)*bRU2^Uc R-#2cQN+/ p7{v$Bѡ*~wv1ܡg1w#ңsQ'w\|''f=Z)bTwf\1_9B rqޗDuE xN/ׅzvR@u][S#+ .VaٜMU%$CQ c_K~i `!XoU:qz`'§DHr(0 %Z+HNRˆ8;>%7(z+5ºjZGC6$iϮ ?Ȇ&xiG+WN{сIB 5Y Wo8V7QKY{iU?[5 5p@ (㮵eCyւ N88WJ%$k^#J/sOWQj7 A e£UoG)?}|~zFK}Dd_Q"/t,+Rh&@&͈iW >:"C>z7z|PCqIHj%ЈghddQY :eAPĘ\4*gU|51RHYr. 
Iиɻ$49;^Ø1,d浬h'> $ AKq(h q(e$9gEE+7#VE$x?(|o2B5= VgZEu΅0-NPaE2 H#zi&*.hGASd ^pCPZs ]hȨvk X95VVHpk$QCa}߷ܾiPjt ,MI'JHE,gMR$oVKB R Rg4ϕs !5>'T|,owL=Ceu (r0=77s)B FIөhMP[f_\qg>SF%4 欿ɍ8?QQ׿]fOSB %y<"H?>w=J?'3&iܩ:UTu]#;~~!?/|2syqW`=n 'u${1  ߳OO4547}h4twW49qmwR8||y?uG>gICӟ|3+f+~ٟ (U]TJ$TU6*Ġ<3..!= bKnjK<-E";J1ϘDRrаVIb.2DURFOADHo}Gf:Q#ϼ)'pE`HL!D|K9J, Dj =i`2㉯9;osa:H!@j+}p/YXTqhT(RPn 9*fPBr 1PUNBj3wP'_ cxgyQ/?rU Ƣ D90D(b 1l-SZxB:(yhZo_ Tt6^5&M+~Ί鬱o[@ J^z#H0S͠fr y+Zǎ˝u[D]WB5r8SWc;h{ .俪|縺Kx(^->4˩9}A!+SiR^8k(D*SAdᢱSBmDm"'Ղn 1کqDPFF2J $F}W;;09;yѸ \YK-7Cv=ݎa=x=J-?wypIf5gӟ5ݵa=߮B~qw߇^iU_Z@R7t VT\o:UVcNu:իыWWO#āՓF]Qq+uZuWO5`oH]Q/?\#ߊBj)5Ǯ2TWL+BRWड़cWWܽn/RWHPWJnZu +UO\U :ߗ3~hLsC嘬B>Fx>ri+dl92 >+t,+R.4#ƪ i3{ zD$ fߙ|OdO=+ł5.|ӳH@'C%:RA( *{TEk&#M-Q#F5:Gӥb̻ AKq(h q(e$9gEE/# oa Wr00Zeh;tSoJ4(6l4!C+VRM5ܨe}ۏ {oQ'Ҕtģ)Rb D˕RFkeU9PJiT R3Ѿ  S``)>dR&qH͟G۔%n*4L9}-3\ƘbÈ}]9or,; X#D\tF圫Q@trBaPN9ycĎFOFcT'Q$Z<8j=!YmZrQb^qA~g~(8b:lȰ~\fΝ-Utpe\=ݻJA(f$:g>4gK'JV*'6Mp-$/T7B_tVf4mݩ6 SV=X>x?_O?8[6Ή_^]W+aKdɸ:bGAHHjGrHuðaRkYfsRG U;߻-h1f;49AkGedI֍Z;WtLB'#^s0O GZ,wZ~T!OQu6C5wz~짏/)3g9<wpg:^F|٧' mǛ ͍>4FMz໌+r͸GB|J;k)@_t>M#@geqNs^wѕu?tOlD;i%U?R% lA 1hA< ÚO >Fi+eD_jgLH"QGw)9 hW+$M*)@ƃR"ZdI~Ҿ3W1psk{&MF[ 1-@( , `UN#:8ܽ}kkh'Vb dH)#:gȕpNOJRr;B%v0$d1clݔUU~iq~d<92 梹/BN^PL!tzt߷}Wغ݉||ykcf`| -+]mKL5渺Q[1֖[a.3D0` խPJG1{J҄)QuRETKH<Ԇ| jcTc@AdBĕa&R+FiPM@ɒQ^g(aQjqe4V frg_U~ЉxvFtg_O &8wO~vw  ֧~TMf`P.Z=d"1 5QxWgW핈\fi~Ȗ 9<-{܃og~'~c :[6QNJpĒN3v&833arD.:@JCk.F-DJxt7M b'ݳ91OB,m뾽l}݂EU6Cˋ.'}䭋㌎km /=B9¤M'l'赩 0 ~sA"znHDBw.S2{ղ6|Kh|m#罱\K=PL Mh/aCwQl \m4̜n:$H$hX ` @A忍_g'-y\G̽[Z2eb/y,&7àYsۦ7X&RS'7YYFq|J昌Zh'OI0/gx̱\Mb2"6|NS Z􊸈UXI|ơ3d5Ab-"cԪtָVFL.نs9;2;/9nCK㉏OE'w߷?HXrdz]ېBzNxSx~yn ɲt;z5CJo=zңGO>^ŖE+ꑓ9]O?~;yȇC'wÛ:?Ƽ˜W[}!V_r+r'i,nݵt<'wG/f}T}|O݇Шf4P+ r(kȑ\$F2E@")uE/'Cm9m՚ϤG}oq~R~7/`RTı,;*3919: ]7uq9´i ){5sK\.1Wɡ iw_>RR.!(9Pb/oqB+%45&qQΥkFdcJ-L6ŦYjbft.GաJmըЭ.J% .G$bhVV%=jHY28؅$f_{O'vmݨtܿ4? 
oYC{irGW;Fj0) 3IwzǸjI[/q-[[?<w]''/Cb;#=!-|Oag'rMVoMS z>O``M왦f傖u*zk̐^ʢ,S|UɈcXE}5TkB#Jѵ:c1X `פ[yS[G`f|˶vXg/"H\3z)6Jھ?sm˱xZ|wGן{tqv:,W1IG`;ޘ?4ۏw0`)纂TdY+C:fL O(&^&Ǒ3%h14i$M>ʖeՔM)^K08R]Ϋc'M$}u>ˣ7*jH߾SFGHp͇x F(֕HkZB)6A=Qߒ߽)A6='yrhUQlwTplcp\.nd<<m9lE%?} Aa\_z<;êB…EqlM0T0G s./R\!rdCvQ;!ڜz|,5941s 5:23g"aXeX,63 XXVc/o\}c/Ejz_0哓o'NjW)箪QR66;.K5JCZ&qJC}H AЈk\-8fl.wxjl QHB+Ҡ4Zc9#65y(Q{옣`_D<k*W|%C  J-9VJ *CVdh};~VTGQ|Zbݺz9_K4x(qxlD8Gćf|Șcr9Sb.Wk D *rL, Jb1gtC#"% GDzdDqJQ&mQd4Xzb8̜G[Mis,yd\406.^os\P&Ԟ}AT|v^b%FVORsqIa q BN/1Pܜ!ُ˥D DQ]wDI&LJiJ4VfwLId:& sL K { .+"T髳 Xzӹ!8vƓjR1\/TƘ [k&+|o oAjpm4ev}IOi.?_c?\zShȪ8hZõ.s"߳^F̪ixu_x[K/Hts)Y8tnp.F3t8$(!IS&$25&qBƂ'Ixtplm,I!? \@5r(F@C[6{wSt-FbCq͎m9SH!qHK^ 1.+ΠѹF١Hc0/M-ߐ?7ZPwۭzEZR5h仠0J&9A\). ,*N8{0ϝ\q΃g?{6Wѹ :DC2Gκ klt"'tӍ0 {my:HK'c Bgrl9;ү l9Q><4QCQ?ȡPSlq–JZr"9Z(]>z'F2qz߸j!Vme({ӷ&AEJa;x xP\Vx,7/S| ҧ*uZ( >f'q˅ϾNkLՍxNG87N(^&ㅖ 6( 6G"~?6Sܗt FڠW}KFwuq~Jk~Yib`d,XŢXt5} =ь<i,SVKjTOMw)vTc$gY(A(a0ђX5cnB$0D܁#_cBh DU:f]TMdJi&&9'/m|5̀v+o_)y˝wW3zzG#;^';F<1xwi0d͊//[2slDD ЬT5M5b (쫅 !㪘@TEh>r3\\.4"RٻV7̜T҂F=ztJ<>R쨟nO?:ho6p×:}'<ͅi1LUtRSեlĆJV3Bщ zZg ]\ 셞yĚ(7p)Ō$oxҷ\cls%45y OO>XZ [ꁑG}nfLrlvVkzoh ^w!AUw'KuI٣,Q?0?-/A'X?{bP18t2tQEFgvt:_0r_=7fեG/8CNO\!djcxD`%U*@kSjO3}<GHv9d,nNNi#/ mr,WwнtQ77uèӻOh{ÅY3g6n! 
m&̝ѷ]n9cn&Ծ 6٤Mo.=h3vtL=,x7g+7IW:8)o-|;cF_ +\Tڎu+|вw҂bHw'snu}@֖9c]>!fCxj^5ڟq5- x@ g5cn5ܜ31H[TZ1Rk>F<i9h0߸k 8 gUxRe?J ̖+Kѯgpw1@51isW{ 0"[MXL@G93b5!&4@;9kH<`YX:y(YR$NL+֖L+֢RZ2Z*ΌݻɦZ6Q;'bމ [Vh]!=uL\MHuvr0sn_w+%1cuS=7jt"'6YP˾& ^zO$mv XFx@k)eVJ"*.hV̶l*e΁_S9EJ|lԒ;0*%Ʈ%`NAj)}5䬰Mo%;oL[Pufg[MD$b ]؈%nj!&M줖B9@qvi2)ƪGn x5}WCcMINQZ($"".16M {X{2WǶz% AKIÔTqyϠMT/1&~-fRO;FzIs*8iݦ^qWZe3٢Kb7r[,t[wNuM^-;baB4SbeTS6Y-y?1\IWn)bW(^ηfdzV7e!;k'y[(ZN_\>e~ʺ/>mkrw9=pߒco?Cx%_vGON.^kڡoWmD{Յ`d0aZ#S{<ū\x4ūژ^ ֭٥Te=ԉ\K6)T޾5]])h !OGM<$&eBM(P %I|RI׎1_1E"`I%`u4K)W1ݚ |MMW9ouI Ǥ_{ ~ON ُOK6l3>PfC60}w_]L^QrV&rt~TPB8Y04Q 2yB*d[l.y˳vnY:ɖM+`7\lu)lgdT1>h`')b/a&؅:"vB-XLM[K\\,9̜ Nn9I+QsD4>d8 lLw] Rv%i1j0ɑlh%`OUcd{)>%B%Dxp3@b>](-.SY`"3O}PQݽ}7 תO^ׁt)o-]i-Lں?v#Q*v pYItFّf IrM&E=-ccVk#2};TNG8qZPEjj 苵JS9`o02 y]c!.Xxҗ:3ޣwup x{/OV/Հ}89^]>qΥd}|rK5dREi`Vr9&&Dۢ'@/COJu9:lZbΤdO4Ø#0snGl:;;)\P{w,0GUZ 1-P-5&Y1@ TfWJJIr05qD,hЂfWb8:qG(}Cn֤Zx8̜poū;0]1Fļ ₈76n-PSǔ15w-A#TS8(PDI,"hn,-vƁZA&Q/Y3iMnq9#עW۶\,ZW#qH.(IzƧ\\MQqW< n5w?o=G޺(?^We"~}^Jt8:^)'z7\jD㲙C;iʝLyojiLK=;y;D5H5͢ޱ4lhJ QIqͰsD_a;RdHVҤ6v57œjS_YTmV1&6K·Z2U(3ޘ3Ro (sn'\QQ]\=r99=|zojꦾ?ao19'T+|w>2a>q2qއn΀C&sև0Jkdj%+.wSظD۸݅IolsC&Iq/ Z Ӱf[ Hh]U;WzK2m_],T,,1.<جwy]# A߭dCm =&rk"+N K6ؤ*{E>%7 63AU&K]ԁ5av5qF5quTҰ5sU()fW#KXjPѵDH܊+Zi6d"Ձդ:Ca5u0Ms lsU8ÖlNIqDF4_$(sng= xw.W˙WSo󳋚)7KQOؒnsP̑eG/w/>kK⾾xee63닇xwggO]yy;%f $)Qk-lڒ?8.7Vy/NN}}i lLh,K%ELƚmB$0vGG>18@\l 8)ɋMD6fീa O:q=)= ޵#/{{ض~$sb0~ >meI+vdIQKݶ"lV~z,cз-S؏٢EׄUӉ_[c=]1ˈ'XN%(@u)*3h"dhLa3Rhd"^>,'T c{B93RL3q> nT0`"{&C J.hˆp`NJ7bi1g;Jaظ :#t*,KN#4<-+&pZ'-vw"xZ6e>ër| Hx8R*)x Di0VaoE:ʹӒ8xfӍMf>Vѓ%Ϸ&yW QRDHQJhiebHt;ppxq{ECȒɓuiߟރgMpYrxב"bD( |Q -a(I n)WDU];-6(I!k\!ڟ;$ApQ C1\|䥵|T>| MZ貸PAr!+F0&a2Et:"E>\sI4h=hhܓFȦ]+Ĝgs{F|:l&iػ=wӕ/+6969'w% f'vIOu/rƚѕ AN]{.]"rE!!CC\|ղǕ XZו[#"mi\;tҶt]͙CrSu#rLvP0r܊&ݾ.w]|؟M2s$48lAQVZ {\ox3]l(tgIjJ(.b[i{)' Ø1Uzfg™scm氂z]lƸYbnGݚȑQp%'D iB2edKsP]ʡ\*D#>s2>R߁9 =H%# ɃTFCwJ 9q"d׉K;ųR;v7Yn+?$gBz˪`(4_,r@9bG$ Ur`-2\Ry)#r%{ԠZ|D ȕZmc Oy=DF9 @  T)rRwU$u&LFҎY\Jy>%nKȳӻvE΁QnKizL w[ `wkYй[ 6ʈӒ GF(DGN9**$Xb*EFz(W:|P| )4ꉙ&尩H)ԄIE$s2JaQiƝV2E (O  
Ϭv|x^UGsnM#(t5U'5]Ӟm\4{ēh9ML0r4#3$*7s*r2X-LB({ix&SZ1FLڨY0cՎya8{!Mt PsJ=ftx.d/F 87E}$)τ22LdqpP5i|v}cq}pz FLSp==S8?v=BG'DR?4 vƟ~r>YO>i*Ț& rפ 7 #_%0X\i :-ͳ6 ʙMZnl$ualQ~]iF몇jw^oPb{fW.7pw@e#`@*] g'M=3-| 0a~鍇iQj#5ylUiFU4ըh *-[VQJR5?^m3s.D-?jz?x]\Mr024o].A>=YM֧ifL~sBoU}_e~q2OExx={WOt ̺,5W''5KϕƳg_ZaMć_\\AMXʠLLyq&Rz=ҽya]ֲASbu˞]%6+0d̪H^Xƃb!H8t@eqie|-]stCcx^>pAaΕkI$'go3Cz(!Wt}sՙ,mƮiTi֢]y*8w %!p?}ьf_q/U3_SLoX:><47f_r2`+_'i %ٿ?rhUC)gЋ"Շ0wEzVO,/G}2YE}i~OfB1 m޿%|8e_gwqI`av #1W~8ܪsU-^ŧy*+xf: ^0?;7S-d>rd86.^M'Ac>؏MA8bqI9WZZ,s8*H;v.Pl|_a *5Z]^D$db;x%$|LX[{ZBTN*W&qO[B,c%a)X^89/}53rhJJ0T_9[ dG Y}/'ŭb#UAIRK*p陈T-3@|&LȘ7 к4sZ^N`fN>td(%"u$R%h0GA o(͑ 5LYB5V4a0_'-!,/kY޲gC=ހ-TБ0Nn@Y3 yOBTļN3`%^T۷" Qo##&tGRb0zŅNRܚHTR(, )Mp=Rۯku#Ahx^ ET O21DK#&ؓiІ$Ğ8Z!Ny"&;y",䉰{"Uy xh,1):rNiO,5QŠB\+ U#nB7YW5UtʷIBx7$hi d,Ҏ39p]$zKc o]ю$dW0-k9MЛNNC+"U]dyՆz-z߸)Ww6 /Wr~mH3J:JX*GDPѵS+Hs9֨9֨kb1$zP[m@e(ˢ Ib8 ֊P,[M&dǜ=jմEeKɰY[UYd%`UHZc^?&\j6m[i ]c.n>q}}Vk_|.bW1@٦641 TF;[s[e+)rb;v( _u)҂ Y ]\A+Di QJL3R]!`]]\Cm)th:]!JE{:B kS]` ]!\nJ+Dx Q0ooQt9g•Bt =]#]ITV;Vc]!ZNWrb /ݙ+Ǚhe?oDgVNBSy 椓UL <3~0\ϋFM̘<#+*(J0 7'8aFwo)pT7>q7x2?`` &fL҄=W_o;K^ Q\ ԙFo〼efuғp!TiuoTOv |?;1*A),{mH"sT% t ]݀h_Za1 ZX#EAt-)ǣb9\Е\~¶ 2Y ]!Z3/D)UOWGHWV*K+\G5u޺B_ >!2NW>>0]mk+vl1+ծCOZDWT1"x1tp)]+@I))I-25쥥aBtut֢ BCWWPZ ]!ZNWR1ҕLZ ]\K-]+D)YOWGHWRjBWΤ<D1r֜ؗ^Ø& ZJe"!mu?-tテ1K{7}ܸw *Q+L2:w/_9pzYCw>̮ε=\ ΁F^1Mp(W*JzUlC7qTs_ gUe]`4Xr LZ&}ّ]AQKRMj)` s2aBD-)iA+9lxB/@JQ+\)m.BFBWVoPvtezuɂ kf++T)th͡ImkzzbY]!`[]\ʋ]+DTOWGHWkIAt%%CWU#CEWʮtut%в+l)l֨Dtut%\tdBqKNq]]@nTͮ'SeAU L\)/-p&q8W]0l7,7̫}FI݋*1e]zo\p.@SfK[3p k0UO+~lfb١>*@$w(ݲF)8@I 3 ; 3ʑc7^Ӽ6nnO'᪺']8&ՙ1a ޵Ƒ_iivI7Y,fLcj'D34fEE^3.KNI9_==Mf?VgG@R+ymϋSA4P:iʹc0 +SޅKDcUYjO٢0ZGx( bب"!iNWO͹9 ƪ*U˝ֽ4 oH%%QlnbP9\VTpߨc[M1,ɇ%]DDNE[RHXHHI?a7 ҋJsh#JK}KU.R6R {KFC66WѓvzUYXO,)Vէ@cZu)jԍ&!%>{(R.Y ȒB4BF( ٥f ӘJ2](_ ,`%- OYdbm>)h-a\KΙ68AĹU X.:; gC3cMmLt%ρBIE`(. 
-c54ˆbwY55i0o\Ӫg*둔YlXkYgc Uh]%`}5 ɷ2 8KcK'[FpՈTb0VjXq21LhHpu $S\, |\R}S [PNj*$_tB2K&@@7-V(!Ȯh,a@L!ՠPwVrE 2nP(SP|k(!- @HhPED&TD;-Z!3֜A΂ŚE kG !.A 7)ؗGC%::P%@ yrPfdb'!\(VZ h2ÙRPHq ,s$eYj YVH( ;:BrFA굱ebU.k ǽ() E>R]`rTc^7r!*"pPB$Y,k x^! C9TU54/dPgm' 8Cw@{3qYdFŘ@Qc iDBM`ߏ!v> `Ⅹl4NkNllß׵*ՂUeWw mCka%a1-A7 /l#TYYL!Jrd@h2BCTVGdm>?߇~1gmG]I0(XִhD*ZΦ$v 䁀::AZx _3X-Df broٙ%3[#oߑڌ-m7w7]w,qzgU<C#%av6wz]n;MwRw¤O52+pcgpGoJGbz+Vdp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbW5\a9QuCFэo ngڼ:_s8Ȁ2֗[\cKߝ-ب:Eoy*!h'qIN$I8acWqQq6G,qʃu-<$Xr(6wK~Z_߶'vS!Xe?nHffkm %Cr L 1Ħ ma99Ců.㭟϶/W_\/ʫ~Og:{Amw;.ʯ7݇݅˻e6tVz7Q9P:7DNȶMNu'/7]ކާ(+v@H;i v@H;i v@H;i v@H;i v@H;i v@H;iU8sLZlx#M;v};6)*zwÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J Wb,.w.vp-|V7{{êr E+>"hah=?v@qOq9(C#תc=z(Y"] 9&=e\CBWmPӡs"Dt5;:p= ] ;]sC 'MU9[g߸g߶ 9h_]'^߲4\ ]2 +PŠ8eJ!KF>$ic DBSI|"ҝYKmWD~ Q>y~Rxߺ砃0ƌD>0j@ C)-*ӴR)y} ?iQ0z#廃wNF%<pDņ۰sqE6^'|6@`z\l'+bGk94O!UIy@ ɭթ~^]Z*miIUC+y!;j Ps+)vUPUC1Yez+:D&';Cw&FriQ*siLc1֭Ǒ;L(\l')|,@1phY8xHe,}ʔY!`B !(8?^׿DWV}+D]ϑ6tiYSB3thwBZt Bϋ.X;{<-RCWlˮgp;VvKxnhmI 3b;]OuzC+thwBFt YF]`*]!\BW=Un(².ڢ+nj7L;]!};G+}L 3ѝ`*}0({zt%m~iN1z͡Ҋ*B愿9gfNŶ]@6zŽ_^!Ar `J߀ph꫖;ߠKa;ae+th/#JezzteއoXL"#\MBW$2T_te!bCtmwAKigv!Z To]/Ξ>n7 y5 Dn(|=]ݻ빶^h2*ORܼD8yy0٨ ^ `ߟGѾ胚^6WH4_&ol5,6-5uCImR/~ U4~q[84чC}W TW|:=OkmwQKK#bͷk+s?ƾf;N!2|Ct|>-o˥Ҽ\NV; w8(V/"Ip̰"L aE*BNX|!?`xz5Nw?ρ뱧.BƊE\waY' ЍGNqSJছozjj|cp(|3`u}r6XݺYq.I㻛:vހ/S'Ӳ.vխІq =S=,S7^ޯU Ʌjo|Q7P cR<_kpyG65/+ͺ *sH:tTF[Yʞ^EwU][V{]v<;; rzčj0 K$OɟLUEWۚ* W{ѩ(o, cNE[; ĉk.3"ѴXx0BSg:H֚NUuy=1 x5H!5B 6 Q)H*"IƘ޵J TUQI?" 816j'gJ:vTg Bp&>nfxfx4[`ɇMWA*AkߌRL4b*qLR%yAj  Ȍ% ^%|RW}O<t8=ly5Q£>͝3FYӘK,DC43*) SM1|b̉9QB|C(}JA=x'ȘXSo-Zzokf?p NNQۑ  b% G^.n4|LoƓv!{m4Rhǹq.z#Ny&0-a"=ǩCy0ְwj2OGb. 
^sy~MS8=TF p`8Ŵ>iuŭWǘ;R~?x89azz+TQp8x< 8LlxN`bp4i޲SNavK\ UJ'nVFA=6j<}MK&A8@ S'Omm~h06gA GL20)BXK e.'\Ŕ}r͑# =:ƶt M}&SZ𡸦Z@ګ@,X`iJ- )pJs*'"Mm$֘eSr(0o+g6du!1~xu՜d5^,ŵ'ͭJJxbLD*|̰&% 2& (-5ݻ,\I2GѶ#C) İĭ#):G(P-ݹiG ׌3e E[҄x ƞ+Y R\hgyv-fýEQq#6&TNL!u5$H)Jep'j)]"[iMM8VI[?@`)1VoZ')NMP ~/EQ F à!yZa1#_κqғo'< "*Ʉ'WO1%#)7&Ҡ cI2酉=qB{ךNki8եw{[3d=R ٻ޶$UwG`o.3{`0ȒGaU,٢ؔ,; 8&fUׯ(' )AKMT1(ShPI[{m62ǒ">ݲ-N L'Au$;$hi 0AQqBӎ3r4{uQH:hO<` HGkP0-KIVO!w_Mʪ2JPۥ(a'.iM(% 6Y|s^,l ˹ɼxxO-ب% ޗ+'Z2Ü]z 7-Tڒ|Y5/ =TI~oLA̷0}goJΡLi.]%b=h[1 B2*Z驎7J~m4\o=@_ X~tEQqub?BNbijgokO_ëAʴo:}Ͼ3K'9a~7ơ +c4Fb]090ǔAzC Nx޹BSGSm'5R69\oz1p[f Ů9f6D 57 l-VQK!+,E͈q;b}{=bM~-kgrIcB P@G ,ErVWg@O]\ nPHŞ ;r.ME&Ly4iDvU[Xu!6|uzJGsRF< ZZ2${o"m'NG~̡*Wx/? Ըs hu.N(ah鄘B82Xsu (.h7NJqaKCHNNx .јyhc9AST-c0QqNX'NRM3a.h۱z:n?j)1md(5:IeMI'J'C$DsZO6hB(eAzHTRZ\?9bBjKpc1<~cFB`sT;rl9ګ%+"N(7JVC ?f /څc4Q0)YyuٰSy]J`S&HLq<-?"V SвR^@t.(k^8G0V\ ]XbW#\#W-%Y™}mj-ztz9:]TŌD OIإvx Kˁ u@oPh޺gZ~Kn|7^/]gA\/A/ffW$hf*t,7mI=]6uÚ᪱UVyba0(ȑAz1ѓ>{ +#{] ƱYDNo;>C~lW*'bX;bFoyB~YoP.Bot쇷߿!ۻ~xF9{_o޿ΟprDl =_G|w?}EצE܈F`K!7{W|ʠ텥fO/ Ih׋nծGU o3*eЕA,③IV|&ETIB5*\j 1 Zu;V8f;wV~v$ I$J h.%O-j P T%eU)-RrѤ:;Œw|9ՖN#OBĥ+5ºR_xo9% :唵ݾN+:Gܥ5x!]tН V~Cw&iӍx)RW*  DE yA<\Oσh&xb2 )M5@V`5hF3cZZ0%AФ;^r5xO2qޣ/*5~{ne\hDsHj ypP)Cڡ(7ĥ31hLAGjY@Z#k +8Na 할F2B AЫHQ}m@:\>C֐ +kR Zu²aCW"Ϟ@hקi5Ws J&NEGPlSL"2B 4Jk-U!(V_T ^f@Onl0L{"ǜؽ Y켝޾PJ 7J)ϧmh xIuƔbdsYSD{GS% 钆LGfL U :soY/@45,=[цm&lʨbWtYV ѾSrۧ˧O߂h(l)qQ.Et6ȒNbV8jIg9v[(/\Y)Z|[rO|q`sżH !%LDLi}`hG%TZ* Gab< 39s~>,`5X/s >ɚErr&G|oYyjoRZU'6a?ڦAVe|"mhxZe>URHhP'Wykt>h B |(R}0MS "eNns&Vـ6AR|C? 
]> 7+EכԵ2N.3UdTTD|:6}7E() E{L ex6]vZMySj:205J W]߿uYy*Pf/\~\Op-_\v|,5NDn%m2DIߔ#Moh{=R*uezq:{^lsOR> e-9m{7o^i}7cޅq}*Y?zϋޒ[pk[-Ys1E]5՚޴9P/~y -W9+꼁O dD پ7p.@VNJ ʋZ ')!҆J'I̩ /J#*ef"D#m>DSc4W'KR +dt4y&!F{USI֕gK\_lBOapH TLACA( Sx bS hҮky^0Twj7mR2]Y;-D+K"Yf޸镐,UԉF%ѕRRZ"I[;ЁCMկ,J-#YFkd6Ǒ+T|ߘ0E(S#eCJ_F_?ρ+^ +^Ⱥ:QmyuܶVXxI^N˟0-pjZQq.~/UO4fQ/yo`wQuƃ"2r0,cQ-ge}Yo垾+:)@l?rf˫=3Mҕƞ;F@t/s_Y&&Rr})Uu1R L9_K`#{O=ĸn/Ǘly޸`oSsG2t(cQq# ~T?̐5W0"3E=B(֟IVA|vKӧՠ!|>-f:|3|Z7hKZ2=VixWvP`'>,PVTx4 <y K&8ZKo$A0tEW;CzXZI\VJD{Eh.gj{"9_Atu|Vw:N/볬|83i? CC$P=%0]EuTUē b kZ_{7@6<&˖Y9syupXS-Fsy<ԂN{4.0qp  7؂q`re0:Omb&=}7znlP %w* &`eJ(DR8 X sڸ8U]fdiHl?:Ҕ5S'((1Lf+K*'keV =ONe 8abeAԻY/sbްAhi]e YNYofS~k_hkvEwj pHX8q18"qҬZL6[Qyf SI&!*]Ps̪e}=%]Yg*9jIȕ HRꍌyqnX3v Äϊ7 ]~lx-k@e℁NN>.qF?.y]]Yc(xY̡TJάk-0"C| }ES<&(ke 6$h̡S(TJ!n<˜N jw 'xKT6b]Q\F 2(j6o%$ĚS&-"|It`E! 3ZkrQcBwn_,dŒWlpFEEMk igűa73nx ֱ^LS[7~|Տޢx>px&pn 'zG AHn|2x1*SMS_Z8:jtuՈꌫqAjb)]U:8g!aDSDh⤸p0ܺeMb<TP&YlY'1_}%q;_]/3}r]pC.LJr?u`YRʠA4H^|gGMľ5ީs<׉;,pv@F",\tQ\5uI%uLDULZQWޙ$dA%LRP/[$ Ps)1uI'hYxa:nlHu8aaN7k%I4K}HZ0oqd;tum&9X94rHFDGŀ6QVfU b(<'v!Z06[#&2sNV4QޓrOdLxhpk4BIj)ڐ ][!XPRڦ!k*\uP2|)c"' lJC|du U=%Su1h\VWsfi4SeeJF ND *6]QJhI`׶>L16(-k]ЍyMl%IMI,=*/;37})/z^_ oF>{Wܵ١U6UWoޫ=+18jaWZ;zjV:-•av5@8ͥXKWKN߿;qM΅E.^z|;e促..fy٪;!D\- baFftU#p ŏqE&`H^烳ӲCe9o6~޿}h~ \>Kl@yM>4s?v|~.Z.7Z2o `?/spiKCQ+-'y,BC'xQk8yx6{+ݦ[ޓެVnz=.m}Voy<#%$\mq_Xk]ɉ6IɽA%d۫ܡ'$z7pլ qp%V&zpMPnؕH|p5~_YGϮaʓEB=fjF/pլE;v+A [zʇ_(z53^y5uqVڑ-+;(}'gkt1kn)׊GyQ2F>l+gkɯІpp4l@ܠ}-|P W~K`x~o5gg?2k;U;Awms_DzG_K>US\e M*C&VHY*jʑuVdZ vħ8"]7t$:,\j3Hv:voʕGetC?XhTumvV+GTŏ^Q`&^-Yy/]爛 /آV 9Bb`Aq+ .+.qϳ9[%!S }A/, KHXטt InQ͜#N eݴIۮxP5Ţf]1|o}G( p+;US6J~Rrrrɖ)3Ż,RJf s"fk r9 ft!4}`;n❻J-lki՝Ɋcm5 ˍ?Ow}r9EҧT7pDI,w&fm+#Aa\8^Y17?i%C5}:ִF?Cm9ӄ}Gzm~hw< )YIzvgՎΌ{l.!_uY[^_k#5m^K dnW-J6ŶU gr//tEι?/qr?A\Y=ޯ|}_Nnq3|S oW3v-u\rNݶݗsd&κ/<2q9q2miʴ䬇 AdY[\4)-9 B0UȩAŶL{'z'ˆz'8z':V9d,! 
ULes:g\&3Wq,tUWrrqutzkwGCr 9q5:YŔ^%U5YX|B⟖.YNA VS62VZF #$C`pV,2Jsw3 b] ck{0X"7IO駡utu1}zSveҮ]A) Vj_f.?[wAʘ04(4X5rj i+(חqR|><:x:gpxwݡtpKdiZI^@6sMZ]wչ ro|[*3wҸK_0&bg{br# BEiQ{orE%,hAGmAKv$ÏF=w{mDMz>B;IW0\?6ْU_n:0.n[l][ D)(峪L,e_5Ԏ=+7acqp+#1=Z(Zz_4k{ׂfZM] ^k}Gefzo;nɩ;Q>8 c4%AU}RlqRpOQpR _Z)8Ͷ,Q;SqP.A!ϲf]moIr+?7VT-r dKY,%4%ey}TIzHK-kd3aLOT=QIhc;ƋdN}骍Jk3=(c( mbLV"$>.kf6 ){[w+GH=~LeN× H"5c6ͩO |Ilrf%߉*OAtB9R23 \Mua]U0nm/C b7 m. U@)NJި0ʺ,#v碂.x!Jaj+BQ~QhDXwX1YLHh'F25J & S-LQԞmgRdᴫ!.0Mdf(H=[v&-*ٺ\+sέb6C`k$sBrĢFށ3jPF`)8+cy"»dHbjBiL*^HZTIh\CsBsERPRdx MD1vz#l&ٱcQR+W&}M" +<EAb  GPcꆑ\G؃RBVJEH%.`H"X7؀|lോn]:,mZLճe4OkvJݯ<X ɂ EtSbߥ߱`V3ͦW}n{&-̰D٢8xFWh2?iLn~orSG~657_|)I\_22-۾B&tWkہ FevdZ/ô/{wvbF[՗;7uJΛP2Н;{ڀuB|1Qv$tEI+ʨA%G~d]_;^=jsIkE(OY8k̫#&I:$RCwȵp#+YI_[]wm-cf?zm~z/D&m:G\S8¯;?u [T$gtΙ(.ECN.$- GjL"v~'5U'J*{9G[!$49Bڰc{ցtTG(UD/|2aR$UXPƒb1d]REgs!{HBd-|_OSf2l D -\, јlm4A&iJd/Im m2>ekeaGe_ʹ]∵O8jofɣ4gok>Amn~t <&/z>f߁3`s=--MQPvp`ijqVII+ ?¦<3eCRFjֻXRuzEKC:,!)Z4 Tkdlf͸J3,l621  ,v2=;e鄫O l8~/!*4V64A(TyGG!&B)$Pc}^PJ蔔V6uBT@W8>-ηVjeaN9CAfcD ׆ģ^^ 7zmx] |3Mt1JNjc*%Gْ`r̵'YY[a>,]vSl<\cfS G $ 1'䳕 8JŪ.QFt㊌ ywE:8۟Çt!6'ׯrMHM^*HMGu7{q4yʋ%'@؍`~f_ROqoŔ7;_,?=`t*iiÓUJ>[ϧY :A./zˬlYI(Y?S0*36|fyMroHyCe "FDG=wnPu}nzZgsSGņ ]_w֛Omgm`{'?Ň| &ß*XhR&|)*GϬW/Ɇ㛿?s`m+|u;~~i%'QyՑ+EKk(mm6(L W`U N >bL%tֵԆٚ&f"($W\-P,JddQ)9Q:q' 93Bw7z@ G{=dSHԿ{="Z.#=\\λ{yB; : uPq(#Fɦ|J둭\w|ޞ#Er_=hO)4$$%aJvILv6D1Ro1RDLVDmE4Z gPL*ZT -R"(^(l>"Bcr̜#5)|S?) _?R&liڣ9`b{/MOl][Ojd%|9 ljF;mt[eO A"IGKgFƐEql[YΌ-HGG=T s(yKY&hQ^H¤tP͈am IL@#e9&*$]V%:\3sk;OS}E|5s=n+ʁ(S;8H?sQG%Vr`nzo7\MOa'K*3>3EKtkq,'Ӌm^~~'TN;ӓēuɥg~ {r&bEJSl0T|+#҆q. Q8A7GxR##e4hi дb1 ʳH#ƌ R˂MO5?p|#i:;#Ec\)ba#`0  LJXsfkYHxhoKz*UV}pfs$9˦E {XEdïe(.OeJ7%q_~navD@'8["v>A@ QBÕPW0y}m(85Y ˥C\Ƽ !2^Y\'> Ť=l;?>3Y1/&6*{ z}.:>>\]-+AXB=4"T6 La)$T7Le+-ΪSx ,T+ oel  :se8_\V=Kg$hiRɥ)];kP%)1ULF2ˬr bC1,u`k:zٿOkÍT ꬑMT nS{IH"́}~\zt +f)|{8V1;Z,T{^*'~^/˙__;w'o߿K?Oߜ7N1Q':9?}f&k>|܀j4Л=L 6[]wc6@pҡZ g7p3[,W=9~2.|gdU}u A%VQq%YrWT! 
,̀oFzjKl'p?)D0 aq:]k$ǐaiI*zIOp;~t#G&*pZ1mF0JZO7i{mZw:UQk<9&w՝?VSKy9gj07tqߍ;`+6x:|z[H^r# $g|;AfxwHcziJ|m? $%[h߆]5*Cz Y@SJx-`l] jQ*יۣ2,;oӁ/ Y:e_/CM4~CimliRgK1Z4/ A{vv75.ʮj7zLAYslPKX5P Ӛi:1SigR=IE9껙T+r?-R!rFAg `hn)1H-h;SjSjA*ܹeUp3LB.EHIj/|Gґ" @]WnGÿ-^mS+žËz٘|?|f2 L{fX=)Bgq+p(N}'omEϭ[S5\鶲hepbŮx:kοy&AM]F☡&-۝j/?6잍M<ဌ ~0H (s6,%X7;PE>xо v@ ÿPBr@Ih7H`NT&vhJh޻9 ܜRW RW$DﻺJ( TWZ$) 19g0eD}BQ2?ry MǮ6\zFu\j3 nR홺+ީmAFF败\v0*jUBU^"3|HUWCQW %GSW/P]Q)f`UUBž+*ºSW/P]1%Ƈ &`}*z (|mas#S&e[Mw޾a xy<YʓYJr-c1 E\0\/5q4ݝº\]rw`y ><`sm:9Ncʠveu7mg; 2c A^QV0#)6gF\TDcrGq$R]ƮvW +ݕJaw_@8aML*͞4xkp<zJq(V}.eBXru]rWU*w#A[oVO={o hz=ř[Q.I2%Aʈfe4anσ71[u zD-湰4mߎ<ݭ|32p\t$ -œy KτQ`T"4U (W`7IlB O=a*5 Rk,5e58w G?7+[YyKrRz gv9v_#4O7>]zP /eխ¤3c\*AqZr hQekaLtkJ@TI-2db E1fnp4cy wD@4:&NF03,*͸Jbp3 AP$@2񰃵6ǒq/B(v &`_ª[XQg"80/:a4xAyA)VF* \%Fq0s? <ǎ۟jjz )O@jdB-#fWʨKzLRnkSg_WWDqd0go_0 ?0W\F S&ϲwg)aOՕ7eɲq hW[eճT/zv)@V|NԽKS̻v &JX7Rb|=&2LU|-eV91: z0bɇfUr_CR%~F6Qml+EhNuba&!0i8*dLǠb"GaKxe%>=wo>?u~o`z`\ʈm>|܀j4Л=L 6[]wc6@pҡZ g7vs2NGZlY̙ѕu3?{`_ͷ %VQq%YrWT! 
,̀rVCzC/H7I.'f>!#RKb v#QhivXd1wKsp<7;Xa<]ݵgw696iL4,VL'(DV3S g:Q6uVua2ޑ}mkh'Vb 葑U;*o49oN 젥E[Q K;NyԢrc*ϭF5fU<;Dȥ묁QӅxbbrT8(*LUsI h"O)I5Lڐg-`XNa,SՇʳ3Ms}N$i&PMH6GNf[rWƂ &)¦d(gUH}I5S%^I:>f]!1wEA3df@Nӛ+BpxMAX,~P!qA8Mm ^ftߧih}ni|Up`v\[' ս涪όEõI'$De{t dn80@@\(<7$ @|4 ɄcJb t,Kf"O`I 2Mٶ!21c8-ljmI HG?YUiML$:2 v8fagHjƚxf][|^$8@3Ak ͡UL4 ,Ѓ9}|z PI^ &"&#sCIINQvd.jv@9SmoE04HOʣNVxA30G Ir X)BZBi哖N(!c,Rjiʨ%FP)1#ft?W3BooIe'JŻۋ5*zr=#ݮxr PI>J8gK e 39`nk)S5c BtL>($}($"UI<6fGq+mVR:bu:-5X%StA厸b,Yzy6%Wwc:v7 ԙS`2YC2LҜ$*%C8DOx,'X[ eSQ&, ^fWBqщ#P=H*Ս;ĈXk=@1-T:)dA%@.n&N$RV.<$օnM$b%!B?[m8j㥜`8#=w~KK뛯~!BB%M:hnuMΦi9Aծ` urV vѻӘa6_qE7itg$hy!们nuE6։q_ rNpvib S"dC e=sy ܒ"a(m3ulӥ׎h˞%#5Ll郷4Y8Wzf_gxU#[UrKjo}s S5ntO6TjĶ})F3w_d碫1{/a.H#d m=7W\ymn}[ݝCta[{.D\vg-7Eksz3/er_G(J& *DKuVPp6|y{1tbaF=nJAiݖ_܎x{6r\i{ϛEZ*B'#qgs5Dps;jŶ<6 W0":i%H&f ʫ`"8k:fUB>= Wk郩Cpu.@;0g a$w2|O^3ޒY}!WbM.ΔRt ɭXrZ!y 9Ur4jPէAL4^ ?nTCcHV!×_Em!q|Ӷ>+ү!Zh7ַ퀴΄4wN iήǸDY1o@뒴 }`7@] L .L'H#!i ^V^i^$H 42*:EL<(.Pd1*Cі@a\9Y[ɃaJHhWQ&P#?‚uoV7T|?!,C>=m'G=D%O{=kE@5H2pB%CF):$IuȞ[T$OT A gޕ o}.JHr=|@ٗ#++B 9֞8=c;6ӌc}k O 7n*2޲ck~@YZx8__n_1l)wme)}@e>;,c C(qBQjnѵmUj @*Yݦ.Y]8+qԁ- OηnJ'y-n=9BFm*D. νJo3dLuOvD_rjGG_!2TB tk2d%J,GQt.&DA5غJ3q._/~Cc8x]oIAPYHO)#R  8RǨRs+Jb+pZg Z$a(I'tn=b3q{H/Ζ}u6Ӓ#mz_6<Ϣ9dBF&*NȬad  2DOiDZP48xְ‹XUL㻊3U?ZOѥJhx9#L{7)q y֢Q1pƥB<aVϱߩ{N{G{DA D˃   . OR:cU WI )ۦZC!Kу@Y cL|F61[UZV@3}.}*&8\?{ Jͪdv f\}=[=fg xLx9/us:'?~8KMOX#\i9 g0t Ͽӽy$_;jT 4ygp,,̀y&Qs##h%6c,dp)LfO)'.D"'P2$] kkd&\YƸ8.8oǗ뚃 %uuǷXkfT%Lt^n`\>_qd=! 
_Tc.XdPٓudJP`T2Tѵ-$e4Ή($ 4D*AQ2XR(9z8IRn)Rŋbr1cV,}*599HAa(RN.kJVm:!T==Η 9 !a$C५-E|C0s/+49^ꗔGUlu٨O6##s 2[!@f,q`61j둍M1~*ݚۆjpG2hH)Ҽy:0Of5jczH]| OQz4J,rEQ S*C/i{ԴX<m!ereS Odc6JJmep[S?5p2(cTĤB$VjB(ha.Vk$OD~Һ[e3q#KlCi fgV:0,;?C$Z& f0Ĭa1*tROkT]v~M7:7Б =K:xu=$>\a0MiZ׿Ӥ .٪$ϔN2IVewv͙p!7͟ttE}t1 ] |η.?cӒdݓCӔw_懺Om/nq׮}<;Wgn煉/嗿͢鈴ѨN#0Sڸit%h.Gz/XVfU;2"w^~~-w]Nng =W:fFf;CX.5' Z.=J^;R2/;CZx8R^Wbگ*2y;]3l!&n8+zX/8w[ema 1S칇N6ef}Zhx]P@_-`6dutG&NDEa%Q q4ae/~y횋B{zMWG@<zr9y<EfEH~k1́YcaFFCz,g7i%[{N{ {U obđ' iOOFfѼ pŅ+{9qB\j,P{Q,?$ƙ9Q"$0d6YQALJQ]&cR6=1Y K6 F+F¾ F(sh(4Ljk-({xRI8RTog[ JMT%r|ָRb L. 47ڽigLЪEq5f HJ`DT>.)"gQABtGB2ʢP4_DOrJ5謍dBg{߁=~EPݶ'$ݑ*d3JQ(RҪZH4 EqP}"Kނ,Rg%\!ΠJQ4G`$oMp*( delVlA $0pܦV)[I0T2c"G5 _T"3g4R;eBT1Dpy G^M&vzXW{g[7_3䣮 R6:,$,J.ۜl1 &Jv*m;"ٗzۛ|g{8QMF #A QT^TxvZBK] 2d Y ?Xctu/ڱ7 U;ܞ&݌'KQK v;ypHMlZh(kFaph%k!5bz{";8]uJpX_te;ЕjסB  + [+FޓPچJI)j%}+F uut4DW$ոf_j4 dP5x C H/T;A "޺"F@WGHWF6;hoƺ"Jb(*ּ/Cghe4R2gFXsܘ;m,Z8-3XHFzO_qtsu(79=rLb)>Q(I"ۍTۣqw;*>~]9$zLdU'$ /y ̧ey;nJP5'0g-pqID?eR IM>_&*hq2He TK皥 |z=񹤱9AJ1ˆ1OVPА@fM7d7ZVF`7݀ʺveWbbNWrm xêZ]`%qdHF{΋Qk%j0fz ]Zz(axEn9x"48GQ7Ћnp ;@UG=+@W8ծC/5+ll=in(w57+N!b+-B#쉮u(JHb15[+Bk;]JftuU'RU7Pktu -V7CW WVт;]1J#:Beڻ>xnplu( PJ5DW Y 2\g[+B;]1J]!]-jn=]1J(~/đ|8Da|Co Z Z,>&h%T3;G [_Ge Դ{pΗG??R(OrABjJ{^%k_BoIgS׊z~#LW)܆8-HwtB\ow ˪ ZCW/M[ +4Y?+\4j';MsnQv/"*ʤD$(1OgeSpvqy5L`>a!&u$i?w3<ܺ!aVƹܟ^=1/yv/A.qHtFfȨLT내I t9'/&/|A|}Mf4ܔ|nǘAuu3\\V ^V=W#n@}0ZetB Gr|J.sXֆkѫ"HW>.!i-B4) JY`PQdҮ@c'ݦΖFijM4$B6 X$Ә+3" 5A*H_ɉ:DmD!Kr`K *'\tҗit̉K"5#{R,CM݈ k*LMT1)G-V#YUS4:%e 7EB(BTI'I4{e Yz B*e%I3٘$otLɢ (C3Z ^hI T^_82# Q !cHmDFv&iRCdFoW u.BMZetȵ:W lM4JRY<ң$D 1^xؑ/r{= ).݉@qLdֶڔ^* QT3j,I:'͋Eէ (хk§tp}E DnJ}^$\ Pt=+ )D!; x{j%x!,;v4#H2]0_@36XRNiE +1ؽ*dx4shRs3`BI dr,j]v?2hFg-K7 5[]J38PIр 5Z0+3\iu *a mU5VX4:dZ{΂6O"*X*JvdZ2ɷJ{0'Uʘx.@1?܂n]yt_V 5ȤuՅ:v:7tLt#+PZ"hFyܶe8Tx*v>X?1&#TϺێSQ`,C)9iD*ZuIBwp 䁀:t kT([|`c"m!I9MH#{Nw8=/ (DhR64#bpP!(Q^":(5\Qr#"B[u7EA,WRw]zʠUbF=]:w3רCQ*J 2Y*l@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J?!),(`n<;`+,MX9*Z@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X 
|@5;$%^up@z(*+`a%ПC ĩP {!`%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V +-wP}fk֣2N>jxJ[ +V'D@ןe%н@ڙ Y J/U |O}txJ/8@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V(>ZZG?,h)z^7o7]Z;\ ᐄKGd)`KF =yC,\z¥^"qx@pW(v>~ pkO`•2Zw@pRWdWd3O`!\i+ARHy0ps9S,`Q2\=CH`f "}=u"+fzpemTJ\`-gUî4箞%\9r={lLh~>ڙG[W>{zhi|]U3e=pf믎.G+z *+ho lhh5/b|I \p*G rG_62<\iyYjd/f_] -0f/+ZcVph9.=_9vq}8w~xcQ5͌b3_rN:,~{^[`0mjTwD5JIf0eEl@u֤.!u1b;Α$`̕O~cYi-gvN>``WdOJs|tWgckYS+R X*z!@OWd;bPO~TY%w+ugа+ ?uGbEn7Ƌ/Oަxr=bk/s8dPY]>_Jot9'4g!6ESQ09Q/shl#ll,9wѿ}n#q; =Wqx- w~@m| ouI1M8riK*dkFFBy Qu!4 }I pCo]/ѡ7kIn{ynG.#fm|I;YiS;eyC[ jtCoճ[XtU/)1篹z vyu`!IwR*a`"z:.^ʬыr}npjcW|q9vͲry6e߫juj⩾E;cߖp!Va|mսuÃ޻ggo{w|<=9=yA D'r-у*kJH^ ml:ЈawΘ>94.ۛ6EzwMd8}iq?AjW/ /mY\}c^%s>Ldɼ̐\+Џ{Y_ܝ~c<޺~ۇ *g:2ujT̴XkҡhIVO4Ũj5]2d5@,Kq-tXJ b{%$Gf"$cV ԣoŻ1/4zzZvwmT/ /^Hyͧ=?;W*}Oeѩ.ZBDt-¶.$WZ魘hUt8okg(gptTś"]}9Hts]/܏c7~|s~ֶ-hh iW~k~>j5hSoU^`G;\9w?;)X,ut٧ܳ{\i[Li8tI[|zUb5xz܃[؜̖nANNcC-1lNU)II7MHj&3C@z25ǵ3>4(>8?m6?7(?Y?{WȎcy LD,/}Trm1>[vYU-PR2)Pisztd-d7<qABrQe6nOZ0ڹ9uБV))9:mg6&Sw8h@E!AU<+4B_D QziH^ v|QRa0R؄5qHY6AѹUEE>Gю8!c+tJ@7[zLVvҐ5} ];|Y&8BW>ݡ' E|=zZ=`ԡuySĚhGtش}9/&SMz-Y#Dkl,^H \a. } M$( \a":"nytCȭg&;˯K!~vJoO=n.~XYy/ &kvdBYbAU`J1e ^+,L%_1u1ҲgsGۙcJXXc*M5blvn'q;㛫ۺ)ڛկlB&7p9+FfJҦ--&cke2Q [7`b7ٓLqdsK5P#BLP b$L-eF6U,tVۚXL#&#yH('`7H"O;"bn!!58AƉ@;;B r>β[ǧgpnC(}--y1 ]dBg3l!w t ]=˕s!m8X"k?yEbwi>ۋr?M#?b?s<;rGgMoo~\_]nbvY8QXp lӳ>DŽw}ț]ٝ:qInWݙ=^Stymo!6՝' h۬RpmmQda1z8ލ\9PF2rXx~ɗeHym\9[]}ּR;&^z"}Gxt[WCMC{zUāNu;0We}Ie}S L*JT9gQ)A4I 8Q$kC#:B:v YS4r`nV;52kKTiza R_7Pס՟v1{Ӫ[3S3ӇeQb/2rWh,yJ&O^I^x,p6:&P jktk/X+{){ٷۋ/ӹ1Z|ZD\m5l^&aǓvvuM"5rU].F[~F]>uX9E" u2 ǂY69ϔc$o|9 :gL;:`=.q1hvQE~!y?~[ wlzw_ӏӫo Mݞ^vσpC8Vn.qF aLXJrU;6GHJ{gSX`{ ^6T~y1SwǶGJP/xe]ZdXKWc9W"<>\6bz G.d,9 (koQkXHman8ζ0AA/ oH0]\ܳnj-~N&W&wnbyT2&J"j,P*3j-m:I6u< 蜲 #֕M.D vhmMbGxx*8Pf`:vXx 1cr q3! 
~3EܚJ&-S/֤ti,;q+{ = 0JIMy#<ۤKR2AK|2xjXylQzoN-:4"` 2ūd MؘxBB_kHxBd(U'Y9tєZlue@p¶Ul:Q%{[{^QjaO(CO+v?s[mlj(KiX<QDaetIg3grJsM19Q'0&^~1]Hѵ$B fEF SCSlʏP &*ofbSa"(6֡hBk|dM UdJHTӒ`/acntJ:mQhn(9:|5 ]QJ`I ײ n#{ ]~w$toj-Π:@(hw`gM3'if>;f(* aQ̵'׿ga1uzNMBNM'QeM9qSp,xʱ1Mp -b;qA1VB*!.sgz8_$‘8Y#vp^b}Q!m+߷zfx8E$Ң_vWwWUWUW}MQ WW`i\Lj8RezČqν" EXgb˙Q S"˱FX7ß?C8ee44 n4(3S(ƟUJ > >|[^C*~ppuGc{S9킝Nu{`w@ĊN0,:[:Hh ׉>PÐ }J!H}~(~הT_tCDukS{&[;BDL#*QgJ4<4frd+5?. ]0wunOz9^{P*4}#Yqxʝi{WWWKВ+/5nb%1YMMjRFE-&R65ɴɹ\̭2_~!)y))mJ| 4]@_jzpoy<)s ڜ6|qUkP!f߻P}>wG\w`2?2 7!AQͽ}ZwovԢw_|d;2& Tu?{߃?m ob%MTu)KrS{&nqaăyݐŜӜh#Jgnnӭ/폢3#qB j&(Cf%`xMCݼf.+/#_yn;+yhu:z.ґDXx&kzo=uC:XRb)CxiÁyPd{0$ 0l2x*uD[/J+PR2pDBJAY&D4 2#$ 86*o[o&EYK7F9  ޘd_.o(mB0AҔңFQsɱIu#bE%cfDUB}VC˽&[Wښô4RT%o  KHڠpAuO .dشAKB'"^~ `+;G"OQ/Lhir9+Y)Wb6)uK8](x%,w, 1x$ [ɹ HX͵{hS|/_+3? =#!XVLkq M{K0+x!v p-eZn&hw~#>&M:`xɨg^pA9($;ѧk8Jy"$p-S~8ZQ[uOu7^k7ﷀWіFΘcX)0acHcw"pqg ^Y#b/mED0)kE>ٲMuWVLF=Y;XPN`AkG 1)Q/G{hOx#B>AJCn"N-);=IT9π1g/_>uagA.O~Uʀ2 EbBL CHRbqRbp᭝z4qRC*+O{2 VB`f}yH)NbX* qP^PJi8cQ sSmM+zR##e4hi Yk1lFra/H- :YBHA ?%N k`g~/XYe)ta@d?:cH/%q( ?CY6LQozQ8[FEsv#7,/$ ܁A#L9Bg2mLw}X@ ^g@0!JX.%*mrUGI 20?;M|,>Ogׁg&뇯es+Sz`f'핺3+AXvFЭt8i}F1簅̔*ˆ귳^_0Փwyɼq :%|c{v^,ճ ԽsӟNm/BM=a{l4wCEc7ЇQ0BD>]cWu2Ȧ^ nZvHH< C --UFUc~8sݫs`|ۓ7ަ?>ON_x:=װf`\ʈm"AI}3еjko5U0js/ks gS:]7YKs-\:@s=n2OU}U9_"su%U?/RdF Q`AgtoM ;}{̛$I!O㈃)RX]-$8 yEl"F$YƫOzDm4.iLLPTਵb8IF!z` RX<)ZFh?i\g]Zeӝ/D1óĬڧ7E)qAZ1EXPt *+.5uꍵTzZxx%TBgھyMFًbNe `p09sH6G‰uXF,0ޯ*,IQN)93Z.# D`C\vœKJ\r2{r/ǓKף]9uȕ;ZaU뽺zJjkSu~ږr5⻢DmJTuՕҊ3C H쌺J䒝LnJT WWߍ"+.=9D`]]=`B; a>7èd]=JeYK/3  RbYٕ-aIA':?:4~>j_1IQ.9 \c4)جĭC ەM+l]RҤn~>~_~?񚤣=gdsٳHrQ}tjʄ4B@RDsXbwa-q`UnwUVnbb0;_/_NN|9Q+Ǘ/0 7^%T_o,7>۱ǽR|]c3>E<:i v!&!51$^acD lXwmm K‘8ۛ@U#䒔?sH#QH1`QttWUWTF^FcD2Y+C?HMrhA #(H8H h1ͭq=H,6C  JP8#:-g˺fgy*:x jk郂O{:9ˈ'XN%(t>rϩ M5\hQE4/Wl)N1|Y-4:%DDpc#[k:`V)'v i@,ywtBw@ۻ-(jbO$Tq|Mf'J {+ $wZ2/cv-)|y;Өz:iCԃFFZbj+]96(nf ^:ӊIq';Mq >^G`M07H@&%bRw\X2tRߢ  }!Eja8$c.ena{ْԆO\5\-,,r5s%ᡧ~$)n~|҉P ^qrZ^8{k.e?IfdEeq>=N 婜LNn, >ABOa \q1&pYBࡆGbNHg'?*ª2yɵ|Fjobeks16wX#x0)9k3F*D@`3n^};t/:iwv!CT 
Z3TY*3ƒRsݦ]q"?YT3թ³Qɝ7<b'1ٶM]jSoZG-&R6uYb%]MHWȗ.oH`Jg nk \(@r[8t8nxPZc.ӟlxnflT>4k>-)GlfHg =lS;`*gӉd:IY*0sE'e4MO?N߆𷥛נ"&Q/Vڇ|V]Q`g}xӏ h%_nRZBN?ޕ#!&=dy1}d"]\>{Mo,N*6BQ\k}wl65u>2Iv{6 TWg ݄1E&jQtbcsy@u޼n~-h#݅ W3W`nxF]~ݕ(LCG``ci2MQ* \㖀d4UVݛv޴.+/rþsH ߏ ejRr Ȉ``I50G#dpUZp7.>~JJpp^PҠmPV# $02#$ ;Ӳ9wI筆oach]:| ڭ`uo!}twuL/w6UG;I 0KM: ,V[ٲ%(v"#CXIA,kE,('0m(NEƣN<1sHGktC}AЋEXA^d (ץ (!Nꄄ.3%j=٤^ LqC4^%|<)o6r>S;k{cR"sْz`-8ҽ2 a8xuV6O5SR^yU\xq<8Z7ζCaAWĜI'R}5k`I099:5Ԏ9Ci$Gb|HMÐaJPìې &BG 0SO'zWccqTn~ȦQ"ubۡ<ܥ/b"ٴ ~b2bЩx_vO˙gyϷ^>|bwF/;D;}v ,K)fM$h`~yǣ1jkh*v`Մ2.kƽ)>\_OnDiBĿ.@Oo= ՞ ^q'>-Y WxMDe)KrE; Q`AgnvP>ҝ~~B3SܥhZHq(44N;EHWѻOzCH*pj_S֊i$5}URj&xJaL|hRΡU,PLB1NB@Hx92r<ýh&r}T4(9KpAJކ@;-%Io eJt@"|rWڴ9<%6HMD[+4v\,Ts 2$ csJA3W*A.Q1䄥 )w+aF)Gʍ^ywۨD9. 0ʼnI+V 6 aPRIwgzH_!] 9!g_bQ}KVoS=$%‹8ƀm3*BbKN^9;!:ϗD#M&*Llu[&d%.Hq%cQ9@_W+1rYR뛾S4SSՃ;kL!<3#iA.Vh(@dƜݴ(!RXa5%ght)!GC#8 ƣ4[#'𞼑R (o dIo#D4Yi"玗Vl'5KY$(\^OlSd"&nё$QrQnn4q`GGbG(ώW7U}f |K}11 (Qruˤi 0ȃ=KQ"xiJ/FVu"CҨL&)>bQ$]%9bD0 ζ]PHωcBb(ZآDɣ|"CpW:ϦDNP(ўaصC8p6#ͼRL2 kF VweQeYӓ@}D}ע}mK4 ,%1'̗ 4r PkR f,Jjtܴ=c9/ʑrxVڑ7泎Vx@390Jq$b V'4eRPrZӦ\!e,V$!J2WYCcLFvu=hpu=nJ-oO=a~9R9%E^:rx5SqMv%ֶ4'SR`nk)5 BXFD%dҗ9)M ɨJ`6g?ɳ^_O7e]J6`,orO9bPn\9E} l9-jG BB$] :黙8 [F锲2蒝Ѣֺ`=jJHq1(OVt*ع[>L4ѯq2Im'?w̯<ӤSwκA:.'<_@NN݊n[n5w46qxnvcu`Pf/|OclJd&Lg ν STO-X;wutNDKWp7ЉFFb{WJ7oz[c_ rZp62a+F뼼i\ybcwiD>ܻͼ|nY+יGˌڷGۚ=1#5n˂]$h:[bPp[yzﳛ|Yqe֑*&^u Y;^z>ű(lJmlaWXݚ͗W&cK/ٌ_f7_d墷l/1fwA:љl_v}t廓_|Qd+Yj(gG7hƼ _rE$A׸ <a]鯷H8;%#q| !9:t8i;NMy>g턹ؿˮ.peYud4 h4Ƴa dhxUyy&'|cq] }^ BMbECL%;*P Q'mC4 Րʳ83-\}w=DwdwN%g\^o/1%UsD:$#l:H|\sm~˺jw{asW]Z~iT͞}e^b'e]廂W \WLk9UZ^JiP VBD*m9,Wm& kD  33C@9a{k?Y{s L9|)D~__my%l^ wecMrwuSdq>ܯy?Ӂ{S e۠[ =NCPڢp%cR'֪T@i"qP^ฮ9M"BFWkc?`q9RE]1MgpI FP"M@ΔRiSa5%EԼ2rYf[vL6n\VG7CSHn]/E|E0>Ŵii^M1!>aȄbT  7.-m{i@rAX1lD)eQĬ#PkaI9c`nk@4_?,҇dY>zs=*x JjwJ2p4;i62HahړkH)$ŏ9w%o}*JH}ƥA&ɗȕVFc!jَJ3,lbknƒbὂj7+3^t~pPﳸhßbby!|rhS ̍Pc}6Mݼ*Cu0IBJVTEk'!5^Ej dK̅G[7uk%vIZq1ʹc@NR&b&@܋\Sc@m-!ٗ[7@!MT` "6ӏ}#" 8 &1 UbQJTA)E Ԉm%!deiJ@q8-KřYQHM:cMpkَ_3@8[S쉋\pqƣM53;#YAe2H)/!h̽L6E8Ap/xL;C~xxka$_UMIpC㑲p}c7tq6'н@w>g\:fu,J%=:wuy~. 
#eQJg 9%'rp*NmCPOu)}Y$L(clA*BҨ WkHU5n%`ܧ{\^-9xÛڬjIfmI_aC5uK5fg |H|o^-3}qWÚYl.Ǻ(!6s(5Ӂ'sȓzz{8?^8Qӟ/DOs^|0!FG gi^t+%!?ߌ/NG30v?DMyޓ̩uGH/h:{5}{8Ի, WA:?*9>RaNI?tV8xC $.Hy*J|yC҈7A$=⪓*q}*S\AR'WU`u:pUTJkzϮvʼEi'WU`w:`Um.Ekqz5pv|=īJ~L\DžI \ jWwъY}T3YUK: ;i>W, kJ֥|]_dZ&3a2tݥ}PvnH2w Duw9o;"`T:mC :˺{ AIh6 Hjų~{w"7w8ړ|N *#G6-elY7_-/vkOO:]{hQe><ݒ._wF_6ߏds KfEh #KI,-ɲܶq-v"X aQK23/.nm?cnkyƥ3#}pAXs/i-'mm Ō#yY;妢!Da?Wjcj^{96'!?UOuiqS)}(Lp%gCvvi$xzZϰٵ0Es+G8D0 FfnW1B/fea25#C|YE?d6J46k^~aqQpv`{>>7oLH?^qU.P7~hvwgz{ _k3ߧW~/\>x 6#%[5|s{y5Y#b ,W_ mTm{[']o{ǬN z,XvAm>3ɮ)~꼓۞D㒒Rie0m7cZrp3Lsv[JhМuX?܉M??O??_ݏ.pŇѮ FζœEOޣϏxkS?zvTgsU]R}jBۍV )?v~Ő[ J9M%S'ͿBb_/⬉ZKb)UNA騏/AcN%œ&~o'a&/L֕RT^gzC)r'LS{3*gH`˓^0_Z{~?E $!<3|>1@]wC<Ü[r!xe*oY\s1NnR=9qPE6ˆg);GeٷaUtkG-Ȥ[pݑy߸.\iq|?ŭ3QF)U5 {1* %ժ @J pפ2z m6~v}ܾ; tȰ#j@5ːY)1GFC Ủ+syٜ΋ Imj, /GYG36JC1h\P;="hIC*瓤܏m0lX :KRBAsj<&0 )%ņ[mnAsj| (}b̝_pLD=#:+:j^'[ LuBSѝKQ ?(zPd :+p kLݺX]pAc'ӳܤ]uOsB^Zj[݊1%Uvsm9-l ..N[榗K֞M(!][ȝf5Bkv7]ZH]sal/2TtwEw/wWӻw7O?ҐtѬ;+5Z^p㳣=Z^s| ;ڛ_m>sX4nCs>/%:?x.zg+qmg0R Is B`[xL8\,P,e^s K2N'ors[Zz*+A% {;dO򖩾8S}b01؋QGJoV/@_/uBt{_.bڭ:Xl[2w$2gQw8)g\qz}}$JVT9Y)` D2LdA=D!1H, -Tb@!%, TwP>%ŵcxk=tZs!Hr 'gMV(`s1,X&sa0$ $:hkvv2hx_:OQ}ŇeQM>G>ΪȤ vY\|bS0s4`U,DPm\iT\i~K"?RH*\*fk\{,I.`Trt;Y2ūM#n抩rL4zyž`-ʼn8m;΂T5"F<3Nyb.%Sjp\ z΍^Gps@en: T92hXa ވ=.Y6ŵW_mf]94Uv^20KɬyB.U[>o(ȭ TKwUXL@N!ޥ@Q HYR&V4G4OZt"9wS>,EyHsY#" g`N-R1D%M0zx&o=12X(Cd>ب! /#H%^Jܪ`e@eI"8(6ޣp4ż_5l%D ql($hdh Ă)BEJ,ҫnCs"3N30.DRl[ "8aK*X#01'ʹy NBA:-)֫S ~Q[' ԊΝ0dJ#XbV$ (c[;!;(cF=%7ba j́G2 ͟.\iqYLIv&,2e}fqP)"򝫒rv)/wm"-uHvuyk'lAPa^^ZMivۻ/iYxkReH+U}˖yK;B:Ymz?oIPdXqh6~F͞,/b~{{we] jٶHHHK:o+JM!x5)Ҽ޵q$ep/8&.1~T\$CRrA!%%YjY<lJ3Ù骯GV(RjJS|)$pClKL7Sj 9rQz*8*#݌-r-DڠkP"D'$^TGֹz`+e@8$$Ι+2Ypk%F4u\Ëu0vIlt -!fߞۿ/m{7!'/xR> c*+v9TKxI3"+^gGn\d)j]7'Ӂk"l%x-GK(TZ:a6֮XK ]hy]4iՊWӓ[?J^9|6%Z?Nu|kTrڗRUTA;8C Hs8 \K.YCFzRQF$"&C`e[[䂠V]c97%^KϹ j#c5qnUTjq[,4 XxXx*Qqa/-^B*?O&ry_*XUtJK֠ 2 f`RL. 
UnCNiez I0&j(ՔpduY#s̢K_zir1iǀ}x2H>$,+Q%ǽ63f92[3#B6XL9.br /:, 5G(@rgX$]&8aԗ8a}Ajq[Dt 8 U=(hi !_Rk䠺Rkn"ApXr({UY2m|cT ˺0" )FF_u4:_%OSL`ۤSԨTd]ꭈl zorꫩmT6؛T0#eaC`$S FE""N KK2&yY]'̣ySMCnku/qxY&'鐶;Đ~M:"Q"N6 CSL&i#M!RV!^XѢɬz`r¿q22w5zk^E[K.&2mIGTQ4ؘ>de(ӦnXr*R(,aq.*wPsPGRH"ϓb>FN1g'v+?;× ʂT,B}6;j6 e@9ubSBBURǔ`)(+,ĸD`$ZyAv \(E}WM') p9wvD޶ ɒ||sHQ9ݎF;舫OS}D]_츟I|*-|s؄4JUfd1*SШER xx b& гzyjȣVkmdavƋV2&##|Vȁ+8 O)tIt8?0rz?AJ|^tyͲ[=qgFQBJ9$h>SE^+I i`a6 \bYbB8m/xUѼ|<=\]#݁PTvj8>zfCih:bb\u ,ߎI/fYMFm#F6/iHcy}8Iݴd<ف/eݯYO}=Zx!|i݁Gk,-_0=Wл$EX;WHvttty9swQb| i,Һ&uZuz_dtLXe&*XpŅ6  vdݩPn2 v]h63'yϯxsLZY D`NzCW}T0tX[)#b\o•/tUњWU[C@WWT +\ӛ;OWo̞]okS? .Ah(< ݢ+s]zi#b^U\_誢tUQni])  ]U wZ֙UEٵz큮 ]izCW.UEUQЕOtUcu +J9홺7DU_j{y;puR-ݗ}L̞3h7 3hzAfKalqHjP̾^ns1^fcUCoor? ׄō/}<4kqYj/.߄EZ߭6EۨߎfjIU ucp^Y!v _~;!Jo]iؓxcݾklw?K_o-mհbsi^OW;|k#}?x][Do0D,&3yLɷto+?.m؜flL//.=,G#ILő#9&+!zmbG9f4mTBqdNCNQ*Hc/喌6phz>j/@DqFp^)KݞN5/+-Hif:O[icVx^)_~v2A}\Xo?wou=CX\m|DHF4s'T|s4ٖ5Uzj/?+ +ѥ" Ez{G֤8#H iUvRK6&e Bi^*:t .!IvX6Ќ^!}58/ %OS0&' C`rTw!$" Q J|Y<&fM*_ͱtXg&bbI(usA N:,5gwa<;\dq̌>*~&~>wx)ť8>hh=xhՙ}R32t]Yœsu{$Ϟf^3tز}m7}-n'.IC|p></):p_wSrqT @ovO囹uBW>\؟{ X_y sJ3Z/,|wCQWq7Sd}>>]h↚ro#6%VCn=1.vPZ'RZP2`}oJh/%VP%q%ﰄr.0<@A>5CM_Vvna xGIC]K݁0P>]0촯U]X<턆_Z7֨o3 }A1}y;`ͣ]Fv$?D'L[sےz~4!E7aM`|[v!]FE!b̃{Zg5˵Zݩu> qPX՛H),5XIDP ʰc; @}ʸV%(YSD9jbN!F/&! 
+E(1x-Kgςg)3u&KmWe} G.I:b2؀x22(>#Jޣ,Fes 'kۭj& %xe(>)NF2D"E-eSF9&ǺUjQ[҇1of,KiDX`7^;#b'ox*~Q|D^,s^6Y>LGRgݷ;:6XVy^hڴnovbgSLx FM<'kc>;1T ?QR{%DV:,9fs(!z( ́+EV!Imfl sZ*qakq,b\)n{[Ve|4eyZ&ӏ|3rD)]vH>ƪ\F*((/P"XV[9U4?,A(,&X8Ԟ- $t-3vkp^f8+ݚu =(^a&:IbH%p@Ǣ3dH18g !Wrrڴʇ8"3BsPhS $],.&C⠚058/2fLˆDZ[fD?0Y :$ h:cJ)@JAA22AB!NqG<@dV*rEaxMx+|ؚuˇe>Pfm?t#_5xgBsst\Q, (OTh·W?..>欉.}B3.>ۨw;ЀDpGSQ r()oT]︝7ڮCkN; dKY,DcU(H,LI0"T&ڰ|W#^Vt[p^ܡc,o֚m~OA+&wiͪI,דSheϫf5g[ zDz)غ^esKK' Rh|1!f*4hi~swHai*5u[ Mte&p(ZA?ؔ,ƘMvLj:DlC݇DH)\,u%i/D]j6C%4Imo ̡Z@+k2*|G:XC>+iǣ*wdA񠥙-\(ޠA: L:  .O H|cs[܆Uͬ.yg#KТ\2*muIR>%V@@{x ulwFõTzpND<^g,ip%te?k Ξ~~n4 Owe` ;vrđ)!\-R6u[pDcxh'=x/T;kI$[Vi55Q7dUXɣH0lyxR+-]ڳqi[= ǘQ5Rk(L;Nu?#>IIF]+enQen}Z*s;~m g6>bRF~`!!B`/\9Y2SR=( meD]ݙ9ȱ·-;}>+M` HM `"\Bf!픔nIJb!c0G{cZ}5 9;\cv^ƻfyI!-D8Y$ &'*6^(zIWZ(;)H':5:v/:`:WXc;TXc;Q Gv[}LeY/Md ٰZVvB,\HE:T Iyp/ZآUHH" 1"h(yE<:i v!b,ךjW)x-K*Ljd<)Vևd2""&ZH0<)c"-Z#5a&ָ6?.M ^p` dzEgCP0`}kgv~dv2I$S JQt)*3h"dhLa3Rhd'v"Z޷N0|M+F9:%K9r:2fQf\ #O$LzŨH^; nV.lz6{Wrh]~1\+u,ap`\ ;`u24QګYSrLC}u`_Soe!̕g/_fbCsäY,.O~EBI͒qQH *yo{ojߢ%?cpafZT.xgjz`499<'ފ{krʼn& xI.ӿ՗~|JG xk58{7'B )i,7XNfl{b[ { e)b٘5k@!%T_z Tu̇΋Jq$mruaߎy'aZ-UAM\\PPwX>k]#Κ!цݤ .Tb jl"(t: NP4NG҇Đ>j%>ʡ\wZ1OuC 92:z.ґ?KF'Ae4zq@ʪLS(HH'ZzixU=KB#nI$hz".)J)oU{R6F08GTF?[*'(3n52$\\y)#xѷ(eGJ5mc O"œy KτQE0*[FJ$) @؄ A2`{¨U1kK)µzqyvܶȹeRSՖl?! @ISdsqI|#>S;Ǝ` (.l7ooc~eN]Ÿߋ_*UVC_$kNZ'a>$0 K|Rӱqd}X]56,V7)ζG;@rgy(x4,?ӔJ@HfD01:8icL/3a%P0 Vtb րf%,Iː`dI ~i,Nit|X/g'JYƦ%,̾\U?E||~)W(;Li9mA>4ˁMW&* ƈYd >8U6+R,=Tk*`M|6(ۙOa9~mU߿]?+u30 6&4y0 <tяwZ5*vsu=闵9~+)ځ[%QxU׃Ks#LЊ t%q]/dWiՏt)KʝR(@8_}z%'ƽ\O `>!#\Kb ZHq )'Qh4N;EHWѻNOzlB%F=w8N{db-GIj4 oIY3Vb0B\sr4jmM|!Eys ^tr:םqW꺳x)~`N ⁛.fjZ6*²T98M؂KcgmI 9砃)E 0û [ZwMJҝ_ҁTFoG)@v] \5*Cz ^ wE%K@E*ۈzBmbbum? pg4RېQ4 ڥJqe߆I.|w8+FhfChhXpU^S? 
[^ZbՋjN[ѪdmR=R3HR^?NmJ+$T΂#"SwP!PjMA#W7~/MUp3LB.YHNj/|GR|CKN Nb"Li)Wqͱëw6%g >ec4j@`6NM!B2ǭB 8p;mykTGS5Dw7yPZߨl Mg 'o@nK9RY&|6"z3QGt+i4ծƻ'BZlY)T0Y}t&UʼʻYѾY\qsc#wձ} hdzy ake-e` 0GM,0jnDh:vF6vI̗󞯠{"] VT`vCh;iTPTa4jOz$ _M|zekO_xQ~G|1wS1K?;\<*m'aV134?aV0vD@bvEI@(Kd68%9KU?k J &$Yqht76,j{+؆JٰHǿW/{[]*JQ=Wy8?/jq7;V·dGo\]ᣊ%)cRezà Vcv1OAw^ v_kZ&M7Zʴe *dT8(*LUiydڢr2m݋"Ӗ+9K `%{wZDP$QZH'T)"x`8."L[Kieڟ0=bm䑚VhXq`^»;fH!ixʢ%#Sj =s4FjZ B82Q "ƑrW^)ǝ6*2F]DЌi +NVHaǰ}(C܀$wqQc]0$T]z'`ȕQ;d I0l}D.11c+!% _+/sP x4Ktnk dߢsٕP<ӝ#<9!w,{BΕ-bhs~2k#l%( 8.HIz:'\vceth03 e ~$ȉR4gKr^[eDD ~lH YFEAN!մ&jpKXQ(\!+bP18>x9IȰ, Ƹ#aJ)$ȹ]Bimҵ ^AcMaO`}Mdύ;]]!KkY0T-抛3.\ny0:(a]ܮbeyĐ97 QHVȇ!Di&)*0'RV3<8i#D(@z 2U'&h3p<|:r7m3ir>7if-cBa%gj{G_!.z@p· f-cʖVgg.O%˴GHYgl>UY]KTDoJ\R+J<RZU y$&Fn`wRu%.IVg|2*95c2(@܁-`pv[F?l2YTBjek_PF1mG7)E#aey%⹲,Ur4l)K, dLbV8<=c?VRggx A͢(HRV,pI*:pBI4%u LsU%CB*-3>0XG%TZX I[tl{> -g)~ #?7Wy2CO룷7,>2[->7 E},?~~&ڀFRjiJ$DZ ?3'XM*bHS98s},{z](}JxyKX02>=/nKfZݢAa5Xy(w))'Zsl뼺Jq~`7i,vLſ|åkU|\ٗ3ףgT_vw&[ޒ-:\_tVܲ_]7딮k Qh n}:\<5sWкulx0oߢsBnw֗x { <#֪y;ۦmm3 8k;qB~a'%v5aYB8>:#n9T²,je'I+RT*m\$qKH^܋g>^}N||콸~/N*:*'X>DiI1+_AFLJ(WAma˨h*Zj5^+9gA*"J*ah>JMSxRYD閻Y{]9KW`*3ǫ˴:d IL~|ezͲLGj2]vm2Xl\#\mrIph{k@Iه׹'V-E cDU'#U78t +Ձ@W2̢]Q& ¥"Be3n|+ "#z\fCW6^DھzS+Ι:#]q)ƻL*=+@)@WHW+eFt%8BWV1(2] ]IA4ɻBfCWP ]!ZNWRtut$woD_4#TE+U.Wi/ur*0LD^H~Ax-?ŏM1,M#Fe5^pzN-ڸzqkf˞h\^u`h7dR@0-c;#ap.+åPJ}b+M1e4o# xvH J+DRS`3sY \r}8%p~ؒ9B2J ]!Z#NWT6ЕX]`D6tp ˅c %S] ]YDV,n>th}+Dz=t[^)?b`slWy\ꆖ)J3JtԶ6l r ]!ZNW"t芁Ɉ56f+e4#BeCWWg]!Z{ QZ= ҕPd]`KٻBBBWV~J(E0ՋЕ"2#BZfCWp ]!ZNW {WIW4zwʂ B맪*J/0n-x/=2OLGRWJ#UҖ/ٓ#z2_^9 R %\̦ø}5.`4iC6J GDEU@l饷e$A0y kGѲ$ePٖZ)Jh[sK*U]B?M7dE۞!k!ۨ% $V Nbie09,NEJSك$+k=+U.+9D{O(iXɽJNkaΈp_ b]uٱ#G!tCJӳdU]=5 B'l @K'df+flڻ~j+k+faNWoz+,2Z "\s+D܅NWЁNUw͆b]!ZNW)ҕԤY ZxAɅs*#v˸I+bZFq#Pjw i*#[p5% 2,bHFt!Е&P ]!ZNWRN 1F [wpy62t(RJ-͈0B6Ѿ|WCW9BXcW<Ъ#yWPѕ@Wv}EOLdDWذl Z ]ZH Q@W/BWjeDWŅ*B%wut=H! 
,(0ը_?3O?Ґ)?_b-P FOpZAuL $!KDsKbQ)2&X@- Vcc<"4Z%~aKH׌*ԗHMN58B:Ĝl;JF+<olPp[楎 a@r|J1v03 fK))51jDLVm(7-]3n_r :*a:0;摅@ |"l$G% 0#x_Q̝B:ݢ|"ap3r55TݎSz7Z8q|=mZhԜAן֭BRS0֔kU1BZa-!vf>AH NGHjTu71䨜D}y}ś-48x{ )ƃx4 LշrR"DSQ:RXXqf*P|7Dk3Ur HW>q{'Z~HU9?X2L̬BRO3@Sgh `T*#*mJQH8gBAI|6dV T1Z %1p{ǛU@M*18 zG]Iih5FL'szծT*dɽ=lw55WuQC@Xgm6 "l5?;Yj((RyJRaSg.3Z*2L92)*0lZCXIgWNWjA%J~RǓ=]_겭bb,%;RuD79)خbV`;3t [@LZ`}@sO7=4:ozoʧ4~$J҃>U)e*q2eIpHuIT#(BdMfX g_~^G'fX/;pm&ooލ'{L'|7v?;fB!T }l8TKޛ gC4t3i<|AQ| |~al}&pd<;,r$l#c4JW((}"Q T{vaꓥ P 0OWwWU`*]JTzL,0=o|pi2u2cmN.*Vyz]ܻd/gj1Vo^n xLQ1m}/`_8;Z-&DO֚"OD6.zݖ׊-wੱF2GH+X>k|2RݱH`WRt|"x}Mչ%}{/&y[clJժBA+FWm1U}S!>R:r]r껂!q7sf AqWUAMn<#*ֵlyWs_-_ahIQ+$&WTE?zBQuxj *r6J?&(x2 RH5|ku$Ζ-JR7`뀗e榸ʹD H{2]LE5o/Ml2E&aɰu^ jNXOie BhSB";"<:@P=(%nF„\A"f81S *zVsuyBuಞM2,)"ÄP1Č8eB +ѩ4MS,2Id\8/G*x]PXȏ>P~M~8aH$.B8j 0fc 9: œx' (/Q ddOЙuF 91VDvР1,H3F>h1HoWBŒ2`GDWF̥K>jN5[:C#Bg,[>jL$)v&8:2}{#-x^KIKZNekD8V̀AS3h3<\ }gi籢)v#t1%glZ6y))nd~7 %+Ip}c듢Fjn3[\Mɫ/v HǍG +p _m</:L>o~^?2}ϴ`zqYlF yn+J}|'m`hlɲޘ≞F<7 t/ -jbpuPkmISCd ~[|:%Q!)`&)(fAzuu!<-$M*=2Ř AG ԛ1mֶ[Wi׫l5 ߽ʡJXדb_37wVadeC?-dgq\qS |p3f㩾-Fd_H)W_Y{h&Oŏ_lƝ4geI__XJ Bh~<%/:^+ V7<)VPsENm{W!0 8 \I/ L&D'y~ " HP;HKD\%:Jgn9ېCt9[VDUwoӊ|<(ad.zmd 6#`V~4 !Yv3|q8 &_.5,~)y;wdǣcQ=w6y].֫kԦjělɎG k2G˛8nƉ#Nj}I;~cd6Ήnϯ՛r}V9eSUT.04"DGP q^BV SKotȊM0LW4)R=R )0dRj,t|?_#"H"#p_o>zav@E(%Ly.qD."6 EЋx'70,  "KJs%c`*\Q߁* AK[5z!CYՈ>m894Ysn?N.ϰnMx:w^wkݦ#l2x*I|לAZ~%wqx?ݞAy칵g4H!=&0;Q? ";x._@8z&1bW#L= ?m`&Jެ2Z֗WW`)a{x6VNqb*UL6*ߚfٗ=A'5w&s@3 |vt(=z#|;N + @>\:E !_N}Ǩ3~2d7Yy`^o]n |!^o]pULТU vkPk9zpĿ>;xflYrCCYk*/+e|M PʞٽM@@$a#&uq6ؿk ZN8#{,b]8=ȑɛpgW[ if#f:I܈ `>X0 qAm u?,hv}b~~jJ',à%3g-+];hV9GgmsߔaFObkT/MNd.kZjҷKB`Fqb>0>Xwz͋Яڬ 5]$"^ V|I(?&QE| Dysg]ܚr%Q[Yf ĦӖi*~ç~`Y~)!n-QzpCɔFRFE` X:)~1kOu/gw]4+_:5kۆpuS: ċN$_!M?^Wqg!)LjuL)TqQZ ~kzv>'Mҏn.Iw$I;7udUh>~4piSI=Z"Ags1Bч ZzW 9 wҏRH׊@TiArр9hK[aNqf ̦Rvd"Z4G0ABز;xg))y;]vm$ L0E {[eSZ0, KF#zr? R -8CY1y C'̧ s" G.-R,Ɠf\3玷Y%" 6䩵#O|! 
[8, 6~^]MtH+~L!3>t5ܔgsa0);%Aia( ȅpanWP`7fjD#.E28"<(\|枪g'=Tbne~H9zUB!,$4KWwqř4H#msg[^^؇I{,1,dfVa9NrIYsn)⦎\.1+p#9M%9zb]a^64+%\HA]bsa腷<]#N/͙DTsܔ^6],d*AWX\n+YojD(D-̆ 晃K[4avYa1gH-/ șVDYH$ʰ.|,U8ym9|!E^Jh/'\׋|S oQFF/٧EZIfa&1zQX}Ni$da<= +Ɗ:lOLɋ>_(:- 0JQ1BͿ`\ W ao\W !t=Eq$la=J{AQǵcgv+%•<'&)9(_*(X.Z[Z 1$%K#>POJ ;po|)ʹp,%5s3+>~>M 5Bα =|y|u$E9+uǡ9 RIfyOLQpV̩~rg`?D~ARLpK/+~Bqs"4%P@Iy Z5b cU7ЌcKL o5\p|sٶܰq/%?q6S,`nxE^0/z{iweؖW Z14ɯ^^MFeכY<&EoO/~ TLdkd'ǦJ#9SOSXpqr[feVW U'O+{aD㸜"%pKg k) x- 31 M_Y+yb 'Q^` ʌ| $2)FKGZ"B$c z*ٜ!Y#zqMۋ^SӈH=FqқW?WW4$>`5>oZV}!r?rlՙJqru!HjѦzM,x8Ip )*G@ދ'6T6ІZۀMm@x NߩU7ּ,HzFR5 ;>bĀuxO]۟6#K{Z\aL|с+K{UdV#_1(%XQx)Uٓ1y *L[V@!V;= i_TY$5qvnGHP Rdo}v/ŝ,R>o>;gKk|9.ČHDv'_wZր Pцw@tsWB#00f#iHYKҶ6֚!Z1|cY@X#þ#0J>?j`A Ѽ}0.;S=Xn^Fŕ\rAE;=m5ܛ;PhіPH1M vA~^0DRI?-Ld:Es&Rw0Heqx8qoj!vY3ՏUJ202;PY _1`0;+\QxQf-60^WzCi[uEKA642)yz8IY-M-/`4e;pASͦdCvHՀ )u@f]m =XQBB+Gp9WԍߕQ p]JM\g9T0D{̫^a[P~AA ^n )<Ԡ@;iw}I}xIi˖ht) &d5TqhZh}3,ԙ튜_ʼWyKgdšj]1 ۫g|Ibix~}4d9`oS[. p*ţ`ĺ\]q3"RY7;'feqoH7_}J ~^;ֶʪpj$qnWܗ*tNEw{4>>в=_-{5- dCٶzA4rݢPA-f/ qR}.rExhV#@KWF6>@C_(%u9w\u@6:-ƃ+wpfhF׋g&ay;F\'e@IAe0=[..<iɕHڥ!ac% #1r(aw?xju+R+Λnz>`cT:'q`s2`1NU!4^nkxsAj!ǹamx_46zPk )5AD*az66@MA௮u,,Njyߦ\2*]̝+2cV(- d-(|gGfN [l-u.4ݛla? K?E;ũdjO(z+/'I~(?TX4̃'}U7ѽO?Nh \o>|xOqK6G^Yi3"4QKzB'l(AX8eK&HKyY<_lXڳrʼsK-ϔ. Os.R M# $Z#cd#!ò/| WPZd;08̈ns㹹^ց_'UC]Z0"QJvj̜P6 >`3nWp3RCRq"i$ʓI-

,<\=RJ @2ǓQW~J\ݗۯdyż~ulLumԣlVrvҋQr| B-9INhz Gii %)rrVZPީ&X F_#MkK2N5u*tNGSIeтk)8 tQ5;ꬉ">gMmRU#F< +YaS9RiA֤yDJgZ% nmUqUt4DR̘8hv9ᆰTSK3A]I*d.%d, :L\w~nuVoܡiI!-UGx`i{7Q8ᴽ(x]%JSO8殟z+!IՒ ҕ.1+RSEAwQ/_co]Ft"Eioݲ%QKUִZ%~R՟TXmHz:qG$Sa%Bjp Ib8ϯ4D\,;C]|1PܴhD뗴hٸh|:nC'V^W=bx_^~ߓ(̊W_W!~c>r I~u3wBc~;zwN~VNa#;8u8&_' TH)zdIߌCɀ|Vbu0(R+L_yKm4WVoHɣ&mES(_ZTTӦ:HR[`p8pVZ]jAKwXp4fo?:pkq,WhO蜉 )n$k,hXrt,R=wTFiPbCbK|`e77iǍ`de+sGx`I Bb%y@T0V+?-QhXEm*sVK l+XK5Rnx\8KualU&7@4|ѶE[ \iҏEdEKv;5Lu%1Vd~Drruek %է* I))8 r@}#@ 9N8n( TȓH؀;t#c1 `LíD^y$,dIKL3RC,r-i{&3X"LNeN3,QMtC݅j%;]`QlIc?:@z|qDm=:hPyUNl*trg5g8fuJ*qfo 65#ɸ |_|&9/#L"e5/N=)fkZ .UHRuҡ(Dֺ<(&`t~ |{B3Rp餯țޮR>#># /FPbHQ{֩zIdJ i#R+f巹vʒTeMY)zY4-\4aZLF2{])O>.eG-#^1acXq҃Z+S)[{+},J3G\˧ q:E|M$":߿vMnC5c r!Jǀ49^ah{]i1%3|Ѹ;:f<M4QQIǖk \4"[Z 8͜/*_U<|yȗitA$:d)J\,YdԠȂ "dB=Riԫ8Mge }ufCszϪz=L%"@6b2esF2EIiXF xG澃o##E>^ A`n_M z' ¯j/#ZșXl#~ >qP119FQqfCS 6tUDBwViDZn+|*c,]oN56@m],є|6O%k@m=JXi7VЖP)¤\\cvkĠŘ @UUhp2l&ZO G_bSѝl|Ka`n_4쳹b569M6OKRɇTR s%s$0zk\{fvlm̪i 9gTy"{B 2PN<_mSF/|yG.XaC8~%{C=B>r ״~5s=fx+QDuP9dfU5묪YgU:[T}Ί"I!B!4_,_$!胯\ Bd|.9Ϲ[Df('fMp~{wϿ#Rpg~+ƺ WbXYި >_ktloE5%s4v"k;bo>aȬӛ`E9h}SzKh_:i٧{|ws{7οr}{sy+5 J{%b6GE6nch ן )Q9ϊtNыlE o7/ޞO7wݵ8-)o^%A=fOOHrᓿn_JT%m w$ٚ1^' U߲0qKX\~+QBSNTfGio'=UL!}IvzdE%BOFwuН,;ꛜ퉲M+C*̀e4 qj9aIDN qw# (w_N*1Ƹ rJI:H@X50NfўDAc@=D…={BZb! 
>jʘF($%*NgYA4)[}!+/cHJC}+9A(1 kZeH95H|VScu!B!l6M\(+GR$`DƱhoK6Q1B f&2; يDĞhZcQ_({\ه} z`XQjWtkU&R"*e^Euoֽxr^Ew ]i.DYUDɷL-NɉǂDӺWQ9-r"&LU0j֬mzȢ^B-FHOV2MuMRՏT+|1"ظĻZa,D )LộO|{qM Www?wL SdmdTJ_{P!W쀲~39mNJ0⽉oU̼1A$d2?br,aH2Qx{(O ʒY+ W)hX4Ծ(shB >!O {]VҸ /ˆok|$2i=^q2k,=U q; n&PN)f\A7!VFB[L3-J$VhEi#*]<aAUVw#t>@'lH?к- fC+hN:CuNÜ'ޠ;զD#&ƮrV ۦwp˜ZrH+F ZqM d}0 ((e 6ϹT]H3^h ֪TK]YW<]`S&9AtϭZXGe`gV/S,FQdc/ Q; X$iE,-dړUAZkAlkTdR%Tc(eW^?t@|P!jd e1Ο -DZ0rUؑtD(&ZWXp$ZbLF f֫jaB=WVP=QRك7Bd+,GJ.\: t*dPt:]Z.Lg5OR!Xmt6>};_4-EH1`{R+gO^v;7̈,"O'?dzq~yћ]zf뺖>{aiF=盱 R7߽}&e|;J~~n.^?M.U 2W_ 2~k' LKyrK' elcrqLcBVf+n7opN4[ӂT]+剂'22eblDfR6^@"Ɣ dR0&6=,u6]sTIxT/ Kx,.6ⲥ mlm/[Fğ^~HҮ[xƮN1خbɑ%+[EǞ \Ϣ,@oj9s55H93< }=ű=KE/^f82,(m 45BvPm4ZF[@[-y5.\(?+qoCUf?GpTj:8ZC&='uA+,ovfq-i!Oܦ3=ewEVLk{Oá<-s{$Noji P o2 t8y8R0QZ:MBvצ7NU5֚픜 a)7\22-!%Gu㏿YkҊS'N ;urgZ *<*gK&ܔϏ>0x=˻)WS8i`{SA/õ7+fɗvVFZV}:ɈZsYjO@ )?Md|3[ӕKQu9u%$#Nӏa]`a7$ؽ呭<6 n 2 뗭T7%Ԋ;5S1j筗-Լ6Ӯ+`MJۼc`Y09 ^{R# w߷ޢ!s˅WCþu61pwI"mnQqכz}mn*}ݭt'9Y)^.~20k8fQn`cO7]v\wb3|39,on&\ǺiLF?\8=7➟AQ'0OZ2.Q6DeV+Q6]4#q:%d\1g=tUd 갤h/%¸ım]3q8co]oXci%9lvnN7/_2_n?N4pTu!Ug<b};]̼Zb6).e\xGt Q\xgZح.&;_FPO9i[^hz>^裫2Q=.ԩb AD3~U,A E@BVR,Q!G>dS7_aI5SgG_\Tѡ,>4CMO8S1a7sh;|?Wf9]OofyGJ^ j>|7{s[UM>~f0ԂD\n]leCX`ڥ("6).j⩫50\iya[Vg so0V ޿y*h:ھ{JyΖAۡ8 AFv&pwDHQFJ€wF9Do= mH$sO라`]cdžFޣӟ=~w79PiGOw9<҇ LI(ۯ%qX4wG!OF@[.gIz&wz[)78]Ƶ)Rv{2t;cmChyf<u'رNĘd=`|x!T#1G\ t 7n4$ZQ-zŊTCX2 U *S|֏m=o3:0BUs!GQ\ƕ KߌtgXW1=-ƕK/c@s1 [C\5!`0ֱ8_BaFH]|-BT!{(g?>}nњoer7C?zn9+Kۨ°eȐN1ٔ!Z47& +a(k\ѝQE϶O:fs*(gQ(ugseQFIP&Rtϓ:ycdE[ 99(4JFXemX!Q!i#hvsQv12W-SA;~> zj٧"1}nD[g(Rs4v98¡M\qlV!+#Vqh,PZaBaԲ]CΕ/R}N(H^Jv SRQL300!RS]$b " 2Ձ_-Y!h_wn|HSyEhO|=` 촬F=i]Ӥ`.7Â{"2*S,os2tWg<)p!l@ACmE E͏۔nRm TZ*>5FXSd9i'vD3[銑dOC/0)E"ɮ)Y$#Ź@R)3+{yܧVv$T:(MW:8]̐`hC#Y-*(mɞUA38C_=}:{J'_ٳk\s) JQ@g^SP#8:J-PUR\,>lL#Uyy6[ 0I1)sЫ}w?ݬVLwy^aͤ5zʵ[ʧ6|Jl5hVכQmvM7{W3.?.p .3)|C=?l?|o<07ҵHZ >Wsp0"haTHKi|Ek#l ,sjFj_ܾ">e)/27?.{g7_W:];b^m?Ws]=ח[OG McoBF20M9.,%^?{DypI^8# 9a,,CW R"rrh&!"~{kVE jw0tr ¸9-AU,AE kAb2")f5S9Lk%$Hi*]P )~ic="B?O#xy7~Q5r-#?篷Y_%8SѱLxoK,.-KlsG7Yj]~ܜm>Mܣ`F0کeı# 
LPfmaD)292Y0Iƭ)J#%x^hƮuD{Z!sG]ka"أ QqQ8XL>p{5oŔy#tatHp 1.4Tͭ؁|ZITGWn=)˰?*|1<1X׻v±N1Y] k~.ֺW Wk/ֺ'^uOdĵk/ֺ\uOX뾚`vqd١bbJŔ]7{MXFŵ$kS..VrbM/.fcQv7:Ɋ"s`HFs"iF+U<sRu2kv1#k3&..f^]̢LȋA\]LU5KIk5D7v?۫jq4Uv;WȚ}W6B5E0JmPAY31oG$M+TFB.\DdJW,FCn6hx{ВS{n^-\zɔ^vÈa>Amvw~RrJEoߏZS Q/bjgrMVzQbPGt|h❳޴DMBB.\D=d #Av0'Gn6hx:3i7X-\zTc>CVͲQbPGt|hm9nю5[ p) pez:F/8ekjr"#Sm9c*bPGt|hŁqL1Vi7 5[ p)B[x(wŠDŋ-9UL$n!$EGƒA>n cgA-߳F;xq!_4ú;r"#SX~'#S\{Й+)# Fb&HIvߘcNԥJ)^LbVhLAgVbly]S^ :`>\K-SV 9xl[8VNߓ{Ȕbc fX t&0ǖg z-$Nq)n L.wVJcN>3A A49LВ"9L THcs, b's1 L H's1 m՝>fQwK|̤Lm;:3cN>3=;c&k|t&3" %s1 aŒ96UL3ER&s1 i5>3=L>1Shi|>f*N>c:#1Sc}c3mA13,RNc:Vr|>fF%I>c:q1'sș(W@n^}r͜8qY)=wgLS1ITdq=U9wS eVBWWן,5} /Y]yg5Ͼ%H[ffST2&_r "4[}, @w7ȐNAeS`VMʋin lVȧGo0$K"3eaϜ#x#2*JׯA_!)%lJ}=,+PY.>ϧK@լ~-ΡO[K/5<ͽ~ګ737:+|?{W*9Y)LI+G#arsb%.rX*Ô!e W@a ~7@W,W*?ܻq][oG+_f2ҧU&Ǚu bL /qEjRTn6o 8%6|~NՇոzGr9→%\o:*0ѱƋSrRSX_.yպt.eC/V9!'ygZDӊ$2}rghԂP/Bm XDP+HyIze*ŠpHV0•4'TB!T>}"Ai'!18m@=N8\?!"ڔj!”Kځ䃣bhR 8JzA{ZDAE=eEe2 PdcfR0ƠM$P\$ u>%#e1Y —W> Ghg`8\ho\gm?nnj}0; J߷X}Ti5#; /ïS:{xaJH͸5|l\^V^>/ _~W%3C5hFVrvսtZZ }+IWWB<#:QZ0 *(O BBQ P(Є+rR5tYXt`v..m|nY^\[dtPSC'Xsك?>|OcYv0ˆlt3r7bk8L'  LHTiehzߚ ġQKq^J*lںM@PX@6o n.>"s޾h1`0SG ;vghXXhstǨ#3i5x&!Lf:Qy+ ڜ$Gk;nefa)_Vn/0L`5ܯy"0g1rVrAGS`^rNq뜋Ҳ;!i?3ŭfj з-TyH "l15 h+- azرXcFs 1/8Tl:ZGco: S[,eo$zKU%$چ5AS z/Z!d#"* ,MXn SR`65uӻHʁŃ!b-91㴝^2Q3!Q6*ÂWm=8k mQD ZmvЦ ʳu8tqWLt|K<'7bKqLhtsvX+ Jꝓc=O:zs!> ᨓxn%ќ=0 +UҲe1* [q5*ncr [-?T% _ |fc0B)ew;?5a0}7r7v[?YQSk81!=K~QDqR].|~՞ql/~\#)_yQP;]{[Eڱޣs~W߷[7>ق9no]oD𢊘tDѿ_U~Vi;ܣrr5A{ x}3 Q5a&B I/ pG -w.,h@_dhGu5eP"z:٘oo1esd+:`:vynM7߶foW/bS|1~Gچ~s;O1.P.GbCCV<Ã\}n\\4b-X8XqDM(ĉWqAy),7a`18ЫG0 MA8hKp8= ( sH ԇ8I\9eDSL ⥖A\[콜$Mff5aJ o\q ²Fr&LM@G9ρD.0ʬ'` fڥ́kuQ.)O^r نtG2X3RE9x~>U#2Rr/tpT Ʉ}kI}W}HTŽ[Bv&BH*[鮨"ȫ'UX"FNe!"IS;CRRd v# (hnuJS.ohSZcWǃ|yH?~%: A㩁$+2pG$-(QF8aCU#|2SL8楮\'<?B;lyqUQYt2MSqT $%i!'$&C|!Rd(44%!UTIi\>bA`Wcc&JW\W)T=y(aHBHJzP*Cq.8K9u"YhrȸDF8xHB@c*l"Ҥk6Z6Bn3@F:hEzi>ʑ]%cs&ehi#\0TClzVZj2 _^pY~[i:Qʷ=ehU#:޸claQkʷRfsz#^C W|5v uU_y-XՔ o*xJ5"N|.t9&&$3,D6 ;UM$]#R-oԬZn~#Rmw}>t/b&s"~yꃃ?oi[vocu)|Mwg4 
(5cN7mEC';6TGϽB0ɓbŠ|(f44u cS&he}p+UPUr [SptGXsb=7ΥqE%^?0ƉˬƔ6dp0y$=f'S܀"߼Iԝt37꠽-'{ۄ{l)qwɾ%F_H UJRU| /nFɊ0D|$zB3qrlvV.2\fepYV 7 TdhF!xZG/",REt4HH25o F_g]̩77 55nt~hc&Wq:^h<2>pfMe~y@+Z~P-+ͻpR٤u 8.QDjcJ:N') '2M3y]DiiÄ/Oޖ4|UTEU nI#V9HTzM΅Xt:,3X>^seuHrHSR7jnB/' V8'#MHA=')_iJRPe's+2V+~kZX`pkZj,V+1N_fkzkZ'܄ÑiLqCNĽ=%`x5m=kZ';ۄFvuf\25C!Y", C^Y+xLI蠤Š$%jcIp&GJε :K΍1_-9KRK62ۨl.,ۨRE01ENW$ Y PB,hե߫AQӮNhb`U1̐܋$ $U$Ji+#CL􈁭e;2Ml`+!m_M`7VMF?B'kesk9.\ ac  c6؎fK*YoGJRiȎ}ݽvuE(ؑɷj{:om~J=і#3!Wgw~Z Axa5g??u9w^{gil:%JׇU>pAuc{>VXT0z҅:fSA{ 5vV8jm.DZ5!HxwŃ *m[j ;h2*NNj:R;w(a/\]oW {p _Ç7Hs M@rZr%9õeU}HQ֒LYjn}-X=t!5vYzz Z$5cM+ w ٹKR3T~Ho8nC3ܑ1(B7Mk?M o8~Z"cx;FH0 a|6[0`$o?A䚚7.]~^M1n{7\+wwc!K *U'\:HXh" "u:jV U-=ا ܦFgɧfjg"I?]~Mh)i 26tb=rNA?7EBBR~j"wjYCȲГ~[ֵۋb>'-|aŌ*5Fѽ5FPD@c{ƻ4" :uWI|oHR/Ec )U2ܨ?CYUuA=$6 C:phD67URZY!i,7p5aȒnvYkwAqnؚ=R"Ea7b4 &ݸ#P KE@($PTʥ2,FZd [.I_˽!TG ki(}#,@}sFâwJk{0!~nw5;LqpM@mBlPk a|I,a "˄Y"pSDv ڬ>`|?4`Dl~4Z&!.'M,H.,^E- =?@E4ݝ -05.VrC c,Yɾe\7x.7x]D.Zq=z 0*[&|AZq$<ˤZ2uR ǼX*=_wcB]NBUBnC謨$w.+Mu dtFZJ0c4=3IB9!jg:\*i2M p@s`E-#iv4(G{)5Akbd{Sdqd82u:.2X$ ,N9%SS33¥cCFEA:+c\ԤiG4|kJc+*HK AV)!zoru|`q<>0.(Zq E k_v "S.S d>lM;jdM;o}tBP"hF@ \ j(R)FLfIY0+twu2;vԈM;oUq*7WܸxU)q z3OuxψMܤ)+eJdt2;vԈM;o0)j bӑT㙬5b UXU*B5 $AKB{9+ WhSGHBvъN[ӦNA\0Zc 6CB2dAxc-(4CP<5A|kMHXMȦɦٲ0[*&t5DRQE-VyTaDoBN?W%YIgczZIuXP_!_~e^/TH_]GaURh*ׯVWVGЬ*5SN=k͵Y=kg 22ũT/>5@LƬ! 
Re>Fi)lCp$9( [B>iSE5H!ܷ!B4z:ϻ'rir / B.qS|`JZ3w F+ǸK_ݕK ?T~!Gnʯ9j=JU_n06nᚖܕ$ ˡhhbôA~R,[VY6E6k{mO{}eiZY5Tб)Zgh\ȫޘqR|5LըZ" U)C߫] B'fs2`2T2+܋Փ"tZe̢ uJ@g<+5xF8uV%bpsSai#u1y.6Dglcdua-)U:#11P0S#K\l,>)Qh/'̂{Sz1͝{Z5Up@=I)K9BfTխ`J LrpjKfʷpYK ;>}v_4qxv^~ZaOܻXaNBLF(B=ESĐ Fg!~("߄ n, _Uuz-Ӛ(S i9՚4XAeѠ#imժymٰL%._阻ɧDeZ:JϘ4V[!$&%} \,و-q8͊ĎRGЋ:c< Dd48iͼX3saD#J-UKڥ*\H#q|wῌnU>& czQĉxժg01?ϧ՟)QXM@uǿcOBhrw=/݋l2_,oj:\|({&W۠KNGHLj#F\kU$^sIL<> b5a8eas :̫m趬M9JӘbR/+!0d24J硊7%e<ԑTDoF0bTv%A- .6W-k7X Pj{ o" 4 9AldfOhdL4JPNh.L^2Q^TPj-!Ak X]f궬M`2^:hLej2UԳ"KZ|y:-M^t-*K\XB u QAF"K6 W-iǩK1ՈT~XfZ9.9گ]|4~ waA#ab~atKz; ܛ@2÷H3oݸȊ]B zqpÀO,D>,: Q;8 ١t|A .Ø6]hثl>YN:i0Е${T7uyh7᪛\"T-t{cSEWClwP X#N^|x sj)9쵬:*S}C^ '}oqˁA;u-i'nܼNw5Gh4lCWG#Ǒ"o,T) l$`DTGnk=9J6t>Gzftrȯvtr6)=D`TB74Y}-BO$m=vTՈ2K7ܮl;UeͦJ Yͧ5u`w[6هo=X s$fw(ˍ8_gG/zӰ.DV/,aV,zh:Y,Onx.$gdD<~6_Fegajoqfȓ4pɭz*ΦAgK\Rb YC֍(h= 0Иq 2K8+*37:3%֟,Ih \͠@'~zurQ/O?,N}=+'g^y՛W<˿_+dUxܾ/{/_㟞x>j@{"~Ͻx=>ӟS٫e<}4]>_~&^4oN'?!OP_nыsSg?o[Ez=ks۶97 qlky$;'g< j(q=wAAe|HN}`jt#-fㇸu@=m(<}q|a۵>4:zIv{cݩS@,?ѱ2jNٿɳ8\x0Fuu,­d0VpO'x<('剙|~wz~+|s__]?C?S-CJPGxF(ĄPlI2+_hɜԛꋫdWn.'Rr

2FE9K`=Ƅ\ui0nؤ)0>̶h>-+F~=&Q$NEOo&SG{y^;N:3o X ? g#G?a\^=6.Cm0P"xo7QƟ@DZ!ŗucʌ6s&'38f~r'=$|HD^ݸ]8V7Ӿ4Psd EӻaUgN<ҀH\ D PjE&k-<]mS߇ohC0~<>ǃ(b~"i~2bX 8[p"#E1'6ˎeA. $4ڰخ \V5VML z*|ju=9t_ t-$%~&cb"DD\:HUDþ]("؝ [|8M`UZg|x˸ P{Pщhq ߤŕ!isBǁu\J1+}nOsQλ|JnOJxU$ډ\:Rf- _ nCllH]l "xi *8C%U%,TX=g^ iOa5n( ]5?bϊO$iloȀ[8džR6;n%lK:Qr^%kCg*e2U,b ]0^a`rpo0~8#"\<3\pf&LE^"pwC$+t3 Z/><7?B[HStlNE5 J6 lRʭkg|cMz'#k:qX5eBD;(>RO)Y{)NPMa8&1aslMP.eploE:p*_]$c2)K5rI.K DD\ڇsAr,K ܲ c]#USVƺ}H9CK*_v'vvɈgRES%Ĵ?fU܂V>$SYjG+*)W&4~]9ӭ*e"yV^ɾ+gr0ֱsJB9[%UgLRbˆaf5hPx1I<  D\ꚗ~ gp.< Ŀ1ɈScA PO;TٌKXY q<𿠎 J*g!1I!b=&}\1b :ocACſwk97fJD^c20.Lb_An8k3xspN'!6ޞ7l AƩl'!O߂5 4{F[u?uad Z(3GY8Q>5ݠ|<;< 03/Q1G&$֍ó[я:,ҰJ>u?)孬.f*dD4H4'Ǩ(<6  zpo:n8tj>4,[NP}jS-HzA9,K\??%/Ffz\Gœ  c>w_4,«\͑aRf,ュ= s ּvp0:eȿipǭ ;|؟[P\nSV*!Z"e-Ў44˂xuB!$(P:>ń"lAc.Ѧ؎x9Ogewi]SoB>?Qec<}V#xxdEˬHTҏPFͅP'V\&X9KͧJg˖yfTg-$SW s޻HtJ2>DdoDNbUJǦwdRm%]$v$NX(nNa%| w6KN$J lbNcLKe ej*.=)ϧm9laů^ҺrN.cGT)zQ4ڎ$NF x=oPtb3v:-wcE(Vral\`+K#NJ|\h%\crɐ#wquLϰ3pKL@Bwdج.0K6ݾݲjI*:CY~\k͙3[ę΍搓_sh]>z}UT`<'ݸu\]FAv7(,߆Ȇ/YxiDXd%"WķY #"hl3ݮL&ϖJ%RIBYU]Yav1󥪴RbQMWHMVR$PA4)T<{x2;KR&J% -HkqRItچtX㺉> ksZ kԨ=B7>Bͅ1_5Qθ4VYUD /k"RW?MH_K I\X BgH!7`^_aZ|"E|ɨtEmHOk u8M}S^=sAH>v7'EɝPc8::^(I/*hYױx1姢mq32Uo6{Ѓ^or0K405 ]j¬уc 96L~O#XjH6',RI>)v7Ǘi"}Q"Hp'쉧z`e\ʧ甩BwX& f$75?A,~{a#ƧmԎVo>iЊNlV,齴ܣб=?2^уIJXQb<,>hZv7W^ bߞ6 3 ǴLD?*CX ][0?>f+fSj#{.q ~ԕ5sh8M906f`wA<6q=P)Gzj \;te"pIQ?Z0z0=毡δ\ çCgLjymtNKM蛣DSWϲNM*x:$)%ݐ寜s~Y˹(I_BD3al3/Ù}eC>ch˙mx䵿ܣ62.06eq6GZ&+z#.1V-iݑlF+B_42|#ScA)_Ǿ$Hba6VE6-P4Q5i,n8cuߟ|IkmW SPD뻲4ԝYUTt.P UI&"~| 6 ^7@TC X;Բ(Ԏ=W$\ 1"{κ#L >fybe=;6Fz֏P!:0Ct`8DoÉϣPK aP,dDBAI$OaD SL!bF! 
E}C/MGܱ];jCL"ј_˄Sr' dTnMVga;}e3oYWTT$32=&uw]'2ӔcD3ޛʈLda/̜ZW*D5{- K5nF}?Vʘ>!q"A#+"y1 WXlڳX :JxHxZ3xGڸZ hYh3݆:0Lc:P5?f%n+Bw'eOx!)%kG-c=ZjlQg5\fp:(b(ᛄ 3T)sΕ"tJQ),]C k ܻ)S@Qcfy=Aڊub+R}-ɳu$ Gahgv#9oSAdU}̱*G>?>m dXA59EsFE6Vk363Y^ 70m4 j^^񸀠qq&%G*U_1tF1g+sW*faT悵h+R[AyZ\2b5$ ˦5me3g~TETV` f9F g((msKJb?%`CISٶr8@:"2¥//GCKǝ'1*Pd2\@{0)㖆 1Ey*>}r{.gy4ߚh= LYMo;=߰<.#|} epON;;7#,'!jy6|2n`[?'MA%MZB '# n%-a 5ɞ"3%.cKX/ez ,aZ zP}Kc`hi];~#pwZ#7Nw\[zlbG,,e}Jd.&J1XLOǩ;0[{<̰}ıw8Où'ezOʻbpsSdr!@& 2i1&OB#Op:3 ӆ󏷽pz?1¾ Mh`c97kK}jJ{k"еK[,m-$M FQ0|Ql< ]$CJY1Ou Vq{i> 2}+O9'œgaGyRm`f.mqGjkpwa&Y Ι󛬐rK-I 9΃f1Hw}?5yx%x<,T/`w7χ|U9jCyy2Q#/iᚴ %$f=|;'t}s5f'=fh$=I PaF_|?kRt^*/?h!5G!{q$x µꆰb"1[=tV7p"'o YVIpbN87N ( ^Ƃٲ`(nb\ded^Rx05r\T9mٮc+4~2s7&V+Z/Ī th(&k(1EW]ct4Aq|O7a1^sdg0YK-2Y-M&I'+M$QyiLƞؤ Mcރ2+;N/Tqu\ שJG%d:U5:Uͩ,k4T%J.{R9i2KԩzPuNqe\OSMr$*Tґ2ZTOL8SyN$k\,CЙz6Jc|e~v/;$`\{=Kxe eų7>m)씬U4f,c`m :l֎qS(>aԎ3 GǔD~;=wA&'|3wϋAǛ@]x?G毹-H*hBxW,x>Ws/jq0 B\( !:ټN|e Q_2_+ccccX (TXEPK*(R$A%QY*AP( e4%ml|UmR=JwsN$զZN 8d #R$ %0D+4Қ %i "d$ |ҷ&`XAL7DUKuzTs$4*MT͑pƜ˰Q}҉9D9Qx.]$.ﳳ8̽M?R'-lwX3Rzr쿊b?&Q88N_"n%0qRXD费`R'B*@Q! 0bY+ib5ʋP0iLd +Oх]1>R+,W|9^/t`tM[\A5 ޴I}B{ ls.kPkdb{:vwZ.D->+Ozf09iK_Ľ2KW VXFX,rsPа Vcho'+&\16j}#"J ׊*&qW ]~IhS wZqԥuqp*$ד߭_DKgEb. WQL; j鶠;s)j1_hIhugzuŸ+y[?Muӡt8k▜9cKTZ;[RGb<gLq,f4̡4kc_ʴtļ,^*YwYoo1Q>T>C88rSν5btfTzXoWyhGY1CsCلq̧&GC `e*dn];+%#1fMz^Wkb֬5v6]$&~ҨJlo= `­b2LJYhI,5gRR%{.9#/uV Kf2dzj|V}ܺJYcĚdU'*Sjiz2*4+>9Rp^{֪9%5)"k-BsSrzޫO(tm|cV`W;LX7Z.w:֛4XR @*?~^$? }m{URK|co0 t:j\|  T %AM*ΑvSŲe-"I` 9]f2hpg$FTheD&ı`F0k\{i}1Q󛟚 @UZ*u,ʢ7#\uV?xҠQd/yyk_x1Igz.&ӑ*TJI,P|.'%GoI ʸ[I2E4̟?|,2.$˴_Xhi8JCc0jO:M,_sW^L&O$yp,#pLb&]^]DYH3+T$1M\E2X &u]rNK{rT7fq#$_.BUYJU;BiըTwŌr~@LpJX^vZ)O/Un֌jH>/dsr#1εprfn_owP܄97sw97ߡ/UBvOwߖA]kkȝМmQ`SC %[% U a메ٺoWZ$ ATE#[Z>ԪݎiR!dT aU΅k]RP[+#_m.Uh&6qZƕ6KL$զDUi64,*Mv*-hAQۏ%4z]zNtR*1xDI$Ru*%PjcU!FQ%_-`.OV]!Wt J)6ꅀobi=nLbRً$)Â,-ΐ紧v&6j.l+!5:6)xS5md!+ة#}vuCt.\? 
k%J7 .v=F95燞]M,H@LvYyHR[O=#KRbq?Ҏ U⌢/L2\,Dr`d/%e )Hj]ŤbYۊbҗjҼtQ.֓6mT;x!հR/`!הVȱMl"d1q9#G|p;,$$$!sg>c)R R;0IIA:FRbWTB5FLOOemu{qI`4scAhO N-wgD_}_mybPzc9e#aciZ)Om 4HJ&7›UGP13U`y4~nBGGт>Z05glpƆ!IJi.8V ^$jJ,,5bl8,3n~{z@shL5KQ'۪[*/1 vݘX٠o(SBY#y qUAp^kjU ĬRRk(RND=TaV*e 3gKUB%?u-V 3}NL奎-{2;ѱ0Z9etu!]M&Dg\)QQ0-wj-~v(F&1(Y(I2io_~t:jj[|GC ` ɿ,NsZkvaeh5U){D= HOO&T')(I!6BiL{ ojj KFvWG& f!p @* W g#A| X!xm;-D>bqjڮLl1R8I#ahR%)2zqQ54|l>Q0L#褄|ʸ*̃{\ȻX@Ԭo}GLꐭ~c} 1 0h>kXƍZ㞩`.ջZ&N@j%Z|>'0 &)(DZ#!?.S`f5Pځ)_yv3Mvv~ISUl+.~rv^qr0w}?}Jϗ)ȚsNS5 :͂0DP驴LxvWNH̔o^q r_w`X6xdL!A]&XTztÀ(-V~r3bYnF *3y {Robpઠi9Gɞq,lbRr؟tF9!<[-۷Qtk.A00J aeAk l6ey"U ,Vt\Lds@#AI/e RAw,2¯'3"p}t-$%'}hf,g b$8U81QFѻ;0Dcs?f$M^79Qmj1bk>:?7m/t{vle'W*"@qPhBI5᠈0d .qoxǵaT~G=jǔΗ7fCwFiSޤo:ΥD;ҍSg踽:㉯5LqbԻ&c?O}*Jn|O _> b? u RWDsEw='5[oGP_txzL꫿@Ӕ@gB;o '#P{;D;@K>N}L]էxPV+axDx_>nz&K4[2Wm â,MTN&U1uuI;h%M`)i16<#6A0 0!X!d2ibq:4m),y0*jLB'Zo2ƄZךUI E]w?`ERGt/klߪʲMw\Dpv%s@0u]8]TgK+9!i40 sP43T [ݹND34I<^<]b056^ P\0LgC[L[}KG )Ĕ!“qynH\Vxx7Vd*P&(R& YS,s%VzVX8N=팅d!Z~ZZdjUE[X=Rv Tyz\&Nn P\[e>[P0N\1T$p<`.6X }ǧRUo 8 0)Řю.<IqbWp Op! kyb23Rm\ﮋ99pk;d*PrAelvgbR6q뤐A&GAYKdT&KDy[k,`\7gr?og&cJ資䰸w5%`J{_(\!rMS'0躦'˥ɰbu׼ccޤcjy% k,[N@B7-a>By[u  q[sH,>D +o'"&T6PZPٖk$ 9=0b6@8]Ogmzfmz=]+2гx9mz蹬J.i3_cz2 ]oLqMg LːE"KJ &Q`ȳs,*W. \Y0;`Tf^V"{ O2?9d!FKށm,fƦΟ/vޢVp$Sg}]{u20(8QA8{zb`zr:׋z輖m/0xUhr荖ccXf?,9$U \bTXJmCm( 6چRPjJmCXm( ih %LpIhJs),c-&:K\D6F"HDh#m$Dg\V 6ܛ6jwD"Q60Dzn ;UP;n7v٢LJ%.S\6F`l#m6iUr$Dl#YƳ *(wmI`wIX-RcX 9zd"a[K K$&YV#_dWٹY]$K]$ߺH y}_"[ 0hP!:M=Жt mn(u֨Kća}`+Z+:exPUCf !,lE`[hR(A-${1t'\9PgC+S# %xt70˝ID;D4N\Q0 `qۚ<6D$)'T^EȃRy y22;9,˔)r<𸿌 W챖%c2GAk!Ձk%/(Tu\K>6tg}T QNuXZZH(0&1! q & A'64\M1 iDc]a}!."[tN^CznXF* XP I 06 %@rI)6jhFnkIGH,1cicm{' DD VY?O_M'3\fϐ*qs%I[h!`v0UǙ8P_4xo›' ~d{i[I$<Ds{&^BF6el͇$- 4' 0G3+:2` o랿WKav`0$9R|9]Ѹ`1(ܲGi $I$uEk;案./y0QVqӪ:Z; gK $O^ P' >NF˪y^lTb>`Ys5Ix .mTJo5?3 $P.2;JCDc^hk$bG]hs6’0qƮ]t%#LhCA B%XF`RdfŜ3IDme;qH(|;nFDhZ6kP1J'! 
yD\Jpd$K 3C.0QQ{Ari2rFVLϾmvdkr2?t2gg\nKI}9/IJU7^SB(+EH9=pL*G`R?SGp79ch"RQvumymlEnk9=-u4#]9#{`|SB-~GB xr<pM*6VS|0h_K=5ԯ!D"Pu4Vn4RndwӉe04Q&1!8Se]vMۍϾf#{O7+ HAV*Srۯ\ۛqD*QݟYd 52mD6|l{EyT p{yNCa|}zيXZJ\Em$ Dc 1("HXMY FL'XLH"89 ج60g w<qxIm&s+SʈhUlGQH]syO3Anvn[quDP!\E 锒w7â֭ NS^xN?Ȕ[a2WVR yoJ=rM9&"DmP"$H=`-%c288*D.M刻lxZ`^g=Ǥ.,x)b2W{Ds"0c|đ68HcGcГB)okI}FؑOgldkO{` IތICt2NE;>B`^ 'V-8δokv#IrU0|`4%1$2 fI`PBLgaL!!Qh Q{IAЁvգ@N=t0 aHpTHb8:o*B<$6)XZ"rMBD8I *EU3" :hFȀq&QI8"H#%X"FD(a2'0 F16suӰf31sNofcSIforRI_$:Z\9ǻxaͱ;bA'o}LSli?QRl&)ݠטD,#Lv8qpߞYS9[4u+df1IKQ+&hIG0E7O~9hX'21bD3I&")ESo,RT* >X)K|Z~b< Jcbtl R^O#5”ؤJ^PӶAr=Q-W3ԬA#f8ӍmdRrPba-)̖2b惛=nלbv p(W6 U[jKJ]r w1@xڵm&SqVvJ[5>_a?{tÞ!5mWn5}KY(]5,rY-(VCh)Ӽ:NI?YݦŕHrT5qfͮ\ۛQ믇DY+5A=Q` HWsBdnG~SMЎ5l=ZE][~pk0U~ ,T'lOΛr߿ \j,5o5GRo1 \JsSZctz '?"TQ \|ÇoTy2#TUNJ1u۴nAGnE1pQnGte@:VNu+CC)Hǹa(1S+ NDzUŢ:V\%h3Wѽu ,:P#aAJ6umܦXSm79'rފ7M9N Ak/y 9eުޡk{Of쀹tGb)qc'(3׵ʟۗJD*=K !bt=e˘%8]\]ʢВW;,fWW.(+eC-WLeP܁ޙ P~$ܢ1{Г}  p٬.> & f)j5U0iPHG0ޭH*)jo%/[.T\J:geʫ+OWrM*AC)Ek lZ7Ek^)v<ʔʬ[q2h~o"XXm)bXـиnXZX)JI`j{Kq=ݞ  aVR|(QQB@tl 3*Q 0\ S!qH*A\" .C US@E\ )*3(V*6,"F%Մ#E$A,At)B$؞zɺ}~2|pSx0e̮4L r۾56I>hyr{3/V)b{j]lc$[:n|ٷ9b3XLۓ_YP݀9•s1_&M#N,ciI޳*'E%kU@B??VK,ˉŨKckf| qi@74_@aλNr0/_ܭ~J{r #`<_uPs咬hd z/k\vΉx[0ބp:`ƝޛM#I2{۽N0қ7} ޸gF7so73^m`@ˋ9n?ڋ7_e;SOX rN[G_ wmg2fz[w$(Hd|3eh~]:u@^|OVh7Klga4!]ϩ:w|W4pP-W?5ZA2^Ǐ?IjPPf~zz֢7I=7݊04?fhf]a"~?fWьn@ʶ?-WDdMx?RgLxF=5ty,JiolN`m.͗XYv^ɭ ۤwa;??ݽSKv "y9~ę_ mKn_? Z~u?dχGxv<17'Aolz;.Q[Y?a#Jd?`_@gK3L_ k8x}>|$6l+q(Q$]YT,!zKr(֋ .UUW)ig%q~+ UBdsRpaRX"{J 6ƹs=QVeDr$:vhL;(ԁ%{Ќ"A(4>h˔tW\*7ee"/P~nfx/G tW/GZ:r%_,`W}0zzIdHz۪m4eܖ~ly Cj2ec\QXXq0XlNEƒ5nqB (CPҶ1D69oHB'ZnHT}q 7j]vYNH87he Y׋ԁgn޳Vj 9y7@s !9,S-'L诘Ò`&!N&69?a8xyCvoXVlFhH-@2r -@2_ooa $7}Ar Լ['+mB┸ŵUHvx ?t!2a 2t۞fo7ɸj-q՛ e|Z  / /~zuX'Ȱ_H>^ӃK8=>?ygF<[wckuXh}̩Zl 50΁vK(N9 G[ .K32GF;#ԗ&}nb/NOfD6Crjrnff]#PhbsrcN.^{N8{N{N{Nr}g K/pSX^j-zBR 7zf^l 6ngWBۂhZԩ>(a}hk#-&۸[+d$QКOʅтӜB-?b3F\ a\vhɤ.*_eQE}^Q4c/*{Qً^TEe//Ra"{Qً:`$(xyEº~P*NNj ;T3C^E.s.{؋^$"HEb/{>"*\<"D-d$ۍ#'=E^\xdrt,ug#waŪA:s"r(qhte)̉]bkE3#wENs>UbUtMd0|h,P. 
w6jh8qQ?[peT亄f}jXPqO(CL BSlEszP~ϱP26."m-!&j\jC:Zd[lϗȌjju^R@asxDŽݍ0C%bv܌ygEyw9YmV8*3G>Soq Y k`e7jHti/(GW' ߘJ1oDț Y "J~:p h4KO uU Jʻ5 &']+A6OuY4Dlh{:_9Z8:*} ITe$~P+ Q/l효nAL-C{Uc1wBx)qx~(sTN|ԓ?5D⣜y1lg]>?~W痗ip\FƲVsqqqqȪ2:.[TT.ŲX J€e'=gO'@2jt 7AA!vXgNGp &!tM9tBr;*'”`hAdN˰]syܰu:rdvJv Þn:rWTE=Wy7X;0B ][0:sBr@r Μܩ'HgA8|7&;73ZAA(g>lԑ좍z٨o8Q9Q9Qye\~8u٨#JD8kFQVt6+mV+mVn͊*kP=ׇ+8 t:7˰S]J9\12p]n⻾턒öO=][bbYa:^~/Wsy<8\VYp񇳆; }Mt-z#<;GS^ v̠?)Cҟݙ vtn3]  ' ~MX}nݻ{+.UfrAm/g"@CՐ9ӵU^Lwyk`۹)NO.y+Z;u;ct7/.unBz沫/=YQŃӽjQ*EH.d"k4i}_c˳%/o[ xsImUyS,Cx i}QGŋ=(俬g)ֲTc2P˜[BX,ծwh7yA/idg)2a@mcHXfBl Wkf)o{ro>,mXں2Vn ^VShCn]Qmvh˽ 8M/0\9!އLjK5u'hkUHhhkill+Jk=j ):ԺtUFIm"F8+\m]>E)mdƅ`[Jp6pRB)р[g,/40(}g]R%^hJ8>&7о\h9U~:ƽN$#\u=(ᘬs`abHp4_^K!%MDta*RYӍe+鸞;p=q=q=Owz+t#J;tR/*R..D֊ǖߖ{4NN}l-yjc7pƖL*ǖo[> zƖoNcGW7|GRT t7U?j㄁;ɝ] gOUfAx> O[8Sy/;z937"~S’ߞ[w- T@9(.XWUQ7sq)R3SBhi\U.{=)"ә~ݜ.j |Ї{cͭ3taur蓳, uM]z}dN\61B*VW"J.Z[WCČwsG/;4 0GL&8to+:VNJ)[)PI{}= HE?6޹˵=b^~P*}aB"}!v]0HNVYAS&a1),㸼j'l1=hs/q'ziñIyq>rm}ob޴ HKؐIDJrAJJ*]I4|Esޜdq8fjow 뮽F4m ՑGsMVoDr}VU'i*FUػmPx~u|j1u=ojCVA2Mlee޺7ݴ.Thѥk33TBaβ3U^tN]1ix.LW}GC5Kf?(؊s\qaHJj:,tqm9#ҦFd0˦X5oZ{8ؾr^`gւc7MPttWBwoJ"mjR$z/9bBEıYPX6"G;DH>ױ!,amw *t UP*Lc%7i-XԯNqAե UK_鶥[~~e3lMBd_ݾxt@XyE93dELŽkjPʱh,2aeR{Mt@Wao  %'Isn^Deun8u)*>xub^`W'dé|$*Js}[N5qXs;%&΄r&6K8gG3UBF>FiBK?:d9a0rR戙W?xe>侚( Q[K^c@v- ^$|؂FHt (Zޘׯ|R)WEM3Pa%͛0~G2^f~ U1iJ%jU8Zپ*,%6MTÕD1m5FD4Ibq ۪pWVp\Qܮ۵Uw *"SQG(xR.bFfX"řnHLQUCZI%L&Z-lC#8* & Unl~~ްPRO霠&7 >mi%e+_V bvDqUhhg}C8>efCӛi@]{&RTeGt\89[L@cvlqw$^ݍ?zs}Jj-!;(&EF %Qbi1>y?@&GT_PUǗ˳Jc.gZ}K*F$J#DIH3?L³H$DTMRg鮗jksW{R1hEzKQ|տ Yieb.wtxwu]-=wTa,0{`3=Fx;b|O2pN.kERͅ|VTGa(S%Z6XA M1/de$$@IĥLPZBu )eliKXdpsq%ּE5R9`j1uҩS*|#W@a{@'gmL$V~p{-4={]f^fG,jr31G]eioZ*iC5Vz=Tb|]5$ 49F )5N6ڤf=Dž^j|3]nz0?"ٿ 3J:hiQJi &V6?9ZiQh ѾNz- w卸ʦ`q,k'ً3mR\u pjWwn vdrwqsm|vzep^l½B:,ui$:2P@$C3CLDlRHmB?4mC&FQ5nTwc!+=S(vǒ!8MPhse`ZVқ0y2V+T3|ha&<.[vf6fTϧk8үY{MT'b0\1 Pq~>hBe)T2PIb±vXMpFEy :lKCW: ឱu5NDJ FU\~\թO^A;*هںX*!.ܵ?tL,R*3Rx䞠M ߽@z\^ԁW^A6*nKap]r8iԗ,MM x~TV.~}\*y;r4q{d^b}z6E 
Jan 20 09:18:44 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 09:18:44 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 
09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 
09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan
20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 09:18:45 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 09:18:45 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 20 09:18:45 crc kubenswrapper[4859]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.390827    4859 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394091    4859 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394111    4859 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394116    4859 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394121    4859 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394125    4859 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394129    4859 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394133    4859 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394137    4859 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394141    4859 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394146    4859 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394150    4859 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394155    4859 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394164    4859 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394169    4859 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394173    4859 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394177    4859 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394181    4859 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394185    4859 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394189    4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394193    4859 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394196    4859 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394200    4859 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394204    4859 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394208    4859 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394212    4859 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394216    4859 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394220    4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394223    4859 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394227    4859 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394231    4859 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394235    4859 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394239    4859 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394256    4859 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394260    4859 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394264    4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394268    4859 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394273    4859 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394277    4859 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394280    4859 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394286    4859 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394290    4859 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394295    4859 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394298    4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394303    4859 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394309    4859 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394313    4859 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394316    4859 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394320    4859 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394323    4859 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394327    4859 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394332    4859 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394337    4859 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394342    4859 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394346    4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394350    4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394354    4859 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394358    4859 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394362    4859 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394366    4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394370    4859 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394373    4859 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394377    4859 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394380    4859 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394384    4859 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394389    4859 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394392    4859 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394397    4859 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394401    4859 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394405    4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394408    4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.394412    4859 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394527    4859 flags.go:64] FLAG: --address="0.0.0.0"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394536    4859 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394544    4859 flags.go:64] FLAG: --anonymous-auth="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394550    4859 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394559    4859 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394564    4859 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394570    4859 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394576    4859 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394581    4859 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394586    4859 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394591    4859 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394596    4859 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394602    4859 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394606    4859 flags.go:64] FLAG: --cgroup-root=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394611    4859 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394616    4859 flags.go:64] FLAG: --client-ca-file=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394620    4859 flags.go:64] FLAG: --cloud-config=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394624    4859 flags.go:64] FLAG: --cloud-provider=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394629    4859 flags.go:64] FLAG: --cluster-dns="[]"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394635    4859 flags.go:64] FLAG: --cluster-domain=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394640    4859 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394644    4859 flags.go:64] FLAG: --config-dir=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394649    4859 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394654    4859 flags.go:64] FLAG: --container-log-max-files="5"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394660    4859 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394665    4859 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394670    4859 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394674    4859 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394679    4859 flags.go:64] FLAG: --contention-profiling="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394683    4859 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394688    4859 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394693    4859 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394698    4859 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394703    4859 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394707    4859 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394711    4859 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394717    4859 flags.go:64] FLAG: --enable-load-reader="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394721    4859 flags.go:64] FLAG: --enable-server="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394725    4859 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394733    4859 flags.go:64] FLAG: --event-burst="100"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394737    4859 flags.go:64] FLAG: --event-qps="50"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394741    4859 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394746    4859 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394750    4859 flags.go:64] FLAG: --eviction-hard=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394755    4859 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394759    4859 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394763    4859 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394767    4859 flags.go:64] FLAG: --eviction-soft=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394772    4859 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394776    4859 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394794    4859 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394798    4859 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394803    4859 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394806    4859 flags.go:64] FLAG: --fail-swap-on="true"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394811    4859 flags.go:64] FLAG: --feature-gates=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394816    4859 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394821    4859 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394826    4859 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394830    4859 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394834    4859 flags.go:64] FLAG: --healthz-port="10248"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394839    4859 flags.go:64] FLAG: --help="false"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120
09:18:45.394843 4859 flags.go:64] FLAG: --hostname-override="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394847 4859 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394851 4859 flags.go:64] FLAG: --http-check-frequency="20s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394856 4859 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394859 4859 flags.go:64] FLAG: --image-credential-provider-config="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394863 4859 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394867 4859 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394874 4859 flags.go:64] FLAG: --image-service-endpoint="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394879 4859 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394883 4859 flags.go:64] FLAG: --kube-api-burst="100" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394887 4859 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394891 4859 flags.go:64] FLAG: --kube-api-qps="50" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394895 4859 flags.go:64] FLAG: --kube-reserved="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394900 4859 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394904 4859 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394908 4859 flags.go:64] FLAG: --kubelet-cgroups="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394912 4859 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 20 09:18:45 crc 
kubenswrapper[4859]: I0120 09:18:45.394916 4859 flags.go:64] FLAG: --lock-file="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394920 4859 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394924 4859 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394928 4859 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394934 4859 flags.go:64] FLAG: --log-json-split-stream="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394938 4859 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394942 4859 flags.go:64] FLAG: --log-text-split-stream="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394947 4859 flags.go:64] FLAG: --logging-format="text" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394951 4859 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394955 4859 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394959 4859 flags.go:64] FLAG: --manifest-url="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394963 4859 flags.go:64] FLAG: --manifest-url-header="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394969 4859 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394973 4859 flags.go:64] FLAG: --max-open-files="1000000" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394978 4859 flags.go:64] FLAG: --max-pods="110" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394982 4859 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394986 4859 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 20 09:18:45 crc 
kubenswrapper[4859]: I0120 09:18:45.394990 4859 flags.go:64] FLAG: --memory-manager-policy="None" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394994 4859 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.394998 4859 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395002 4859 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395006 4859 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395016 4859 flags.go:64] FLAG: --node-status-max-images="50" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395020 4859 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395024 4859 flags.go:64] FLAG: --oom-score-adj="-999" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395028 4859 flags.go:64] FLAG: --pod-cidr="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395033 4859 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395039 4859 flags.go:64] FLAG: --pod-manifest-path="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395043 4859 flags.go:64] FLAG: --pod-max-pids="-1" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395047 4859 flags.go:64] FLAG: --pods-per-core="0" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395051 4859 flags.go:64] FLAG: --port="10250" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395056 4859 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395060 4859 flags.go:64] FLAG: 
--provider-id="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395064 4859 flags.go:64] FLAG: --qos-reserved="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395067 4859 flags.go:64] FLAG: --read-only-port="10255" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395072 4859 flags.go:64] FLAG: --register-node="true" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395076 4859 flags.go:64] FLAG: --register-schedulable="true" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395079 4859 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395086 4859 flags.go:64] FLAG: --registry-burst="10" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395090 4859 flags.go:64] FLAG: --registry-qps="5" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395094 4859 flags.go:64] FLAG: --reserved-cpus="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395098 4859 flags.go:64] FLAG: --reserved-memory="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395103 4859 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395107 4859 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395111 4859 flags.go:64] FLAG: --rotate-certificates="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395115 4859 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395119 4859 flags.go:64] FLAG: --runonce="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395123 4859 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395127 4859 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395131 4859 flags.go:64] FLAG: --seccomp-default="false" Jan 
20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395135 4859 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395139 4859 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395144 4859 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395148 4859 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395153 4859 flags.go:64] FLAG: --storage-driver-password="root" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395158 4859 flags.go:64] FLAG: --storage-driver-secure="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395162 4859 flags.go:64] FLAG: --storage-driver-table="stats" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395166 4859 flags.go:64] FLAG: --storage-driver-user="root" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395171 4859 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395175 4859 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395179 4859 flags.go:64] FLAG: --system-cgroups="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395183 4859 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395195 4859 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395199 4859 flags.go:64] FLAG: --tls-cert-file="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395203 4859 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395208 4859 flags.go:64] FLAG: --tls-min-version="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395212 4859 flags.go:64] FLAG: 
--tls-private-key-file="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395216 4859 flags.go:64] FLAG: --topology-manager-policy="none" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395220 4859 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395224 4859 flags.go:64] FLAG: --topology-manager-scope="container" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395228 4859 flags.go:64] FLAG: --v="2" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395238 4859 flags.go:64] FLAG: --version="false" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395243 4859 flags.go:64] FLAG: --vmodule="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395248 4859 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395253 4859 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395357 4859 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395362 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395366 4859 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395370 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395374 4859 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395378 4859 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395382 4859 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395385 4859 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395391 4859 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395395 4859 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395399 4859 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395403 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395408 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395412 4859 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395415 4859 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395419 4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395423 4859 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395426 4859 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395430 4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395434 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395437 4859 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395441 4859 feature_gate.go:330] 
unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395444 4859 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395448 4859 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395452 4859 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395455 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395459 4859 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395462 4859 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395466 4859 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395469 4859 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395473 4859 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395476 4859 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395480 4859 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395483 4859 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395487 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395491 4859 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 09:18:45 crc 
kubenswrapper[4859]: W0120 09:18:45.395494 4859 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395497 4859 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395501 4859 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395506 4859 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395510 4859 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395513 4859 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395517 4859 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395521 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395524 4859 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395528 4859 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395532 4859 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395535 4859 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395539 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395542 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 09:18:45 crc kubenswrapper[4859]: 
W0120 09:18:45.395545 4859 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395549 4859 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395552 4859 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395556 4859 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395559 4859 feature_gate.go:330] unrecognized feature gate: Example Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395563 4859 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395566 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395570 4859 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395574 4859 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395580 4859 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395584 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395587 4859 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395591 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395595 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395598 4859 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395603 4859 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395607 4859 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395611 4859 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395614 4859 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395618 4859 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.395621 4859 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.395633 4859 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.404042 4859 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.404080 4859 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404208 4859 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404221 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404231 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404240 4859 
feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404248 4859 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404256 4859 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404264 4859 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404272 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404283 4859 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404295 4859 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404304 4859 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404314 4859 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404325 4859 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404335 4859 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404347 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404355 4859 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404363 4859 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404372 4859 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404380 4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404388 4859 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404395 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404403 4859 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404411 4859 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404419 4859 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404427 4859 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404434 4859 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404442 4859 feature_gate.go:330] unrecognized feature gate: Example Jan 20 09:18:45 crc 
kubenswrapper[4859]: W0120 09:18:45.404449 4859 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404459 4859 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404466 4859 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404474 4859 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404483 4859 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404491 4859 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404499 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404508 4859 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404515 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404523 4859 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404531 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404539 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404547 4859 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404554 4859 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404562 4859 
feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404570 4859 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404580 4859 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404590 4859 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404599 4859 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404607 4859 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404615 4859 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404623 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404631 4859 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404640 4859 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404648 4859 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404656 4859 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404664 4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404673 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404680 4859 feature_gate.go:330] unrecognized feature 
gate: UpgradeStatus Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404689 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404697 4859 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404707 4859 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404718 4859 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404727 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404736 4859 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404743 4859 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404752 4859 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404760 4859 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404768 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404775 4859 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404808 4859 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404817 4859 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404826 4859 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 
09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.404834 4859 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.404847 4859 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405083 4859 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405095 4859 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405105 4859 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405113 4859 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405121 4859 feature_gate.go:330] unrecognized feature gate: Example Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405128 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405136 4859 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405147 4859 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405158 4859 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405168 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405176 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405185 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405192 4859 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405201 4859 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405210 4859 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405218 4859 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405227 4859 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405235 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405243 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405251 4859 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405260 4859 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405267 4859 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405275 4859 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405283 4859 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405291 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405299 4859 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405307 4859 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405314 4859 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405322 4859 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405330 4859 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405337 4859 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405345 4859 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405353 4859 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405360 4859 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405368 4859 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405376 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405383 4859 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405394 4859 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405403 4859 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405412 4859 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405421 4859 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405429 4859 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405437 4859 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405444 4859 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405452 4859 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405460 4859 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405468 4859 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405476 4859 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405483 4859 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405492 4859 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405507 4859 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:18:45 crc 
kubenswrapper[4859]: W0120 09:18:45.405514 4859 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405522 4859 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405530 4859 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405540 4859 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405549 4859 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405559 4859 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405568 4859 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405577 4859 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405585 4859 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405593 4859 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405601 4859 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405608 4859 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405616 4859 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405624 4859 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 
20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405632 4859 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405640 4859 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405647 4859 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405655 4859 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405663 4859 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.405671 4859 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.405684 4859 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.405940 4859 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.410322 4859 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.410457 4859 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.411454 4859 server.go:997] "Starting client certificate rotation" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.411493 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.412252 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 16:01:53.991487623 +0000 UTC Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.412441 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.422089 4859 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.424835 4859 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.427618 4859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.441250 4859 log.go:25] "Validated CRI v1 runtime API" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.465401 4859 log.go:25] "Validated CRI v1 image API" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.467846 4859 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.470975 4859 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-09-14-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.471023 4859 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.497854 4859 manager.go:217] Machine: {Timestamp:2026-01-20 09:18:45.495608656 +0000 UTC m=+0.251624912 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a9c4b411-791a-4e67-b840-f9825626554f BootID:3d814a40-acf4-473d-aa01-76b4cff444d5 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:62:ef:09 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:62:ef:09 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b2:c0:2a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:3c:53 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9d:57:ac Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:e2:cb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:6b:ae:32:54:b2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:1e:e7:b2:72:8b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.498287 4859 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.498509 4859 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.501582 4859 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.501865 4859 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.501909 4859 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502172 4859 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502187 4859 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502430 4859 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502475 4859 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502830 4859 state_mem.go:36] "Initialized new in-memory state store" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.502949 4859 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.504934 4859 kubelet.go:418] "Attempting to sync node with API server" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.504962 4859 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.504994 4859 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.505010 4859 kubelet.go:324] "Adding apiserver pod source" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.505025 4859 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.506980 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.507081 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.507103 4859 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.507477 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.507600 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.507526 4859 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.508666 4859 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509567 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509609 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509625 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509641 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509663 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509678 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509691 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509724 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509741 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509755 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509774 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.509856 4859 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.510125 4859 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.510707 4859 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.510889 4859 server.go:1280] "Started kubelet"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.511104 4859 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.511103 4859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.512383 4859 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 09:18:45 crc systemd[1]: Started Kubernetes Kubelet.
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.513050 4859 server.go:460] "Adding debug handlers to kubelet server"
Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.512717 4859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.32:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c65d67b00d80b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:18:45.510821899 +0000 UTC m=+0.266838125,LastTimestamp:2026-01-20 09:18:45.510821899 +0000 UTC m=+0.266838125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516208 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516272 4859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516315 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:42:19.145486447 +0000 UTC
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516376 4859 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516385 4859 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.516488 4859 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.516589 4859 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.517366 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="200ms"
Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.517448 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.517569 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.517644 4859 factory.go:55] Registering systemd factory
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.517662 4859 factory.go:221] Registration of the systemd container factory successfully
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.517972 4859 factory.go:153] Registering CRI-O factory
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.517987 4859 factory.go:221] Registration of the crio container factory successfully
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.518049 4859 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.518075 4859 factory.go:103] Registering Raw factory
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.518092 4859 manager.go:1196] Started watching for new ooms in manager
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.520215 4859 manager.go:319] Starting recovery of all containers
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.530661 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531030 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531047 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531060 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531072 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531083 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531094 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531107 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531121 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531136 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531149 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531160 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531171 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531184 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531195 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531206 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531220 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531231 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531242 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531253 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531264 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531278 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531291 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531302 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531312 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531324 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531375 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531388 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531399 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531411 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531427 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531437 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531451 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531483 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531494 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531505 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531518 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531530 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531542 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531552 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531564 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531576 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531589 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531601 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531612 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531624 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531636 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531672 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531686 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531705 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531716 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531729 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531746 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531759 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531773 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531803 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531817 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531829 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531843 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531863 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531876 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531888 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531899 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531910 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531924 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531937 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531949 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531961 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531972 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531985 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.531998 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532010 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532025 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532039 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532051 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532063 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532077 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532090 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532103 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532116 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532130 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532142 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532154 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532167 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532179 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532190 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532202 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532215 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532227 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532239 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532250 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532263 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532275 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532287 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532298 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532311 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532324 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532337 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532350 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532362 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532375 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532386 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532399 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532411 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532427 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532442 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532456 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532471 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532487 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532499 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532512 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532526 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532540 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532550 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e"
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532562 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532575 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532585 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532597 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532609 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532619 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532630 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532641 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532654 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532666 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532677 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532688 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532702 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532714 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532726 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532738 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532749 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532761 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532773 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532812 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532825 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532836 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532848 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532860 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" 
seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532894 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532909 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532923 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532935 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532947 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532958 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532971 4859 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532983 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.532996 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533012 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533025 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533038 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533052 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533064 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533078 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533090 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533102 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533116 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533128 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533142 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533157 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533169 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533182 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533195 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533209 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533267 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533282 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533294 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533306 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533318 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533330 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533344 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533356 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533370 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533382 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533395 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533407 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533420 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533436 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533449 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533460 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533474 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533487 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533499 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533512 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533525 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533536 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533548 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533563 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533577 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533588 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.533601 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534201 4859 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534225 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534251 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534263 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534275 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534286 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534296 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534306 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534318 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534329 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534339 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534352 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534363 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534374 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534385 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534396 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534406 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534458 4859 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534471 4859 reconstruct.go:97] "Volume reconstruction finished" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.534480 4859 reconciler.go:26] "Reconciler: start to sync state" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.544211 4859 manager.go:324] Recovery completed Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.552094 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553151 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553181 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553190 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553932 4859 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553946 4859 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.553961 4859 state_mem.go:36] "Initialized new in-memory state store" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.565474 4859 policy_none.go:49] "None policy: Start" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.567272 4859 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.567344 4859 state_mem.go:35] "Initializing new in-memory state store" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.570297 4859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.572302 4859 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.572340 4859 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.572366 4859 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.572415 4859 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 09:18:45 crc kubenswrapper[4859]: W0120 09:18:45.574654 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.574735 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.616778 4859 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.630741 4859 manager.go:334] "Starting Device Plugin manager" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.630823 4859 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.630838 4859 server.go:79] "Starting device plugin registration server" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.631380 4859 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 09:18:45 crc 
kubenswrapper[4859]: I0120 09:18:45.631434 4859 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.631751 4859 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.632845 4859 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.632874 4859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.642552 4859 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.673274 4859 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.673401 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674576 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674588 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674741 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674930 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.674979 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675654 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675806 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675915 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675939 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675978 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.675970 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676609 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676642 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676811 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676873 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676902 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676920 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676938 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.676975 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677748 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677854 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677868 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.677892 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678843 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678866 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678924 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.678945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.679216 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.679252 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.680101 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.680126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.680134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.718256 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="400ms" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.731541 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.733358 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.733381 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.733389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.733402 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.733868 4859 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.32:6443: connect: connection refused" node="crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736544 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736630 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736672 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736763 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736846 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.736875 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737012 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737041 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737101 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737177 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737217 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737286 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737354 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737395 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.737427 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:18:45 crc 
kubenswrapper[4859]: I0120 09:18:45.838806 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.838876 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.838914 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.838936 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.838962 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.838986 4859 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839008 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839031 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839056 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839076 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839074 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839142 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839217 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839150 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839188 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839148 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839097 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839213 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839259 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839270 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839197 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839322 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 09:18:45 
crc kubenswrapper[4859]: I0120 09:18:45.839369 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839413 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839450 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839471 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839487 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839516 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839492 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.839455 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.934571 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.936401 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.936463 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.936480 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:45 crc kubenswrapper[4859]: I0120 09:18:45.936515 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 09:18:45 crc kubenswrapper[4859]: E0120 09:18:45.937124 4859 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.32:6443: connect: connection refused" node="crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.026151 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.038715 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.050124 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.065302 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9941fc143d76e783cc60a58e053f9886159f02ad4e725bcea3f4ee1c402ff76a WatchSource:0}: Error finding container 9941fc143d76e783cc60a58e053f9886159f02ad4e725bcea3f4ee1c402ff76a: Status 404 returned error can't find the container with id 9941fc143d76e783cc60a58e053f9886159f02ad4e725bcea3f4ee1c402ff76a
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.065823 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-854a08a9797bbdfaa9d9066be9847321e2164ce9d1f9bceaeb54722ba299257a WatchSource:0}: Error finding container 854a08a9797bbdfaa9d9066be9847321e2164ce9d1f9bceaeb54722ba299257a: Status 404 returned error can't find the container with id 854a08a9797bbdfaa9d9066be9847321e2164ce9d1f9bceaeb54722ba299257a
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.075447 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b2b98791a89528a9f77ee2cc2da2cb5a6a9bb28033415ff51000122444d64d50 WatchSource:0}: Error finding container b2b98791a89528a9f77ee2cc2da2cb5a6a9bb28033415ff51000122444d64d50: Status 404 returned error can't find the container with id b2b98791a89528a9f77ee2cc2da2cb5a6a9bb28033415ff51000122444d64d50
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.081159 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.089307 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.105974 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f184a5fcb58ed840c3350ce814a6980f705a42c4d19550257bbc002841a29571 WatchSource:0}: Error finding container f184a5fcb58ed840c3350ce814a6980f705a42c4d19550257bbc002841a29571: Status 404 returned error can't find the container with id f184a5fcb58ed840c3350ce814a6980f705a42c4d19550257bbc002841a29571
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.106925 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f2718259e141ffbe7701b06295234df3076a9d235f0d12d348c8a1feeb3469d8 WatchSource:0}: Error finding container f2718259e141ffbe7701b06295234df3076a9d235f0d12d348c8a1feeb3469d8: Status 404 returned error can't find the container with id f2718259e141ffbe7701b06295234df3076a9d235f0d12d348c8a1feeb3469d8
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.119398 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="800ms"
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.211096 4859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.32:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c65d67b00d80b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:18:45.510821899 +0000 UTC m=+0.266838125,LastTimestamp:2026-01-20 09:18:45.510821899 +0000 UTC m=+0.266838125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.338086 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.339819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.339883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.339911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.339955 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.340620 4859 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.32:6443: connect: connection refused" node="crc"
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.511758 4859 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.516794 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:42:51.590392823 +0000 UTC
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.577407 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"854a08a9797bbdfaa9d9066be9847321e2164ce9d1f9bceaeb54722ba299257a"}
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.578519 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9941fc143d76e783cc60a58e053f9886159f02ad4e725bcea3f4ee1c402ff76a"}
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.579705 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2718259e141ffbe7701b06295234df3076a9d235f0d12d348c8a1feeb3469d8"}
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.581116 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f184a5fcb58ed840c3350ce814a6980f705a42c4d19550257bbc002841a29571"}
Jan 20 09:18:46 crc kubenswrapper[4859]: I0120 09:18:46.582048 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2b98791a89528a9f77ee2cc2da2cb5a6a9bb28033415ff51000122444d64d50"}
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.709558 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.709718 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.838492 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.839163 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.920202 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="1.6s"
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.946247 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.946337 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:46 crc kubenswrapper[4859]: W0120 09:18:46.991448 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:46 crc kubenswrapper[4859]: E0120 09:18:46.991539 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.141282 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.143048 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.143117 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.143140 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.143179 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 09:18:47 crc kubenswrapper[4859]: E0120 09:18:47.143930 4859 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.32:6443: connect: connection refused" node="crc"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.477919 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 09:18:47 crc kubenswrapper[4859]: E0120 09:18:47.478858 4859 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.512115 4859 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.517148 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:30:45.718639362 +0000 UTC
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.585506 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.585680 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.585760 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.587112 4859 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61" exitCode=0
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.587197 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.587308 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588373 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588418 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588487 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588494 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.588575 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699" exitCode=0
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.590869 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.590998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.591058 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.591954 4859 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d" exitCode=0
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.592053 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.592113 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.592906 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593267 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593345 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593442 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593874 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.593894 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.594119 4859 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d019ae4f9815cbf4169b04638cee2f36f3e3af96db68bcfa27ead131b6af1c15" exitCode=0
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.594185 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.594262 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d019ae4f9815cbf4169b04638cee2f36f3e3af96db68bcfa27ead131b6af1c15"}
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.595118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.595206 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:47 crc kubenswrapper[4859]: I0120 09:18:47.595265 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:48 crc kubenswrapper[4859]: W0120 09:18:48.509768 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:48 crc kubenswrapper[4859]: E0120 09:18:48.509874 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.32:6443: connect: connection refused" logger="UnhandledError"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.511340 4859 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.32:6443: connect: connection refused
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.517938 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:11:21.437459904 +0000 UTC
Jan 20 09:18:48 crc kubenswrapper[4859]: E0120 09:18:48.522440 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="3.2s"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.607220 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a73f3b63cd5d5957c366dcf2640e091581126bf17931e7d6c276724d66529d6a"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.607331 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.609242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.609276 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.609288 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.611150 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.611244 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.612120 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.612158 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.612170 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.614912 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.614947 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.616828 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.616856 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.620066 4859 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016" exitCode=0
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.620102 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016"}
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.620190 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.621100 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.621131 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.621140 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.744826 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.748142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.748200 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.748214 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:48 crc kubenswrapper[4859]: I0120 09:18:48.748242 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.518366 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:50:29.16906431 +0000 UTC
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.624674 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376"}
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.624705 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.625509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.625670 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.625814 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.627396 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce"}
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.627430 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc"}
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.627454 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3"}
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.627578 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.628521 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.628572 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.628585 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.629487 4859 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0" exitCode=0
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.629603 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.629620 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.629834 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0"}
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.629931 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630637 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630646 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630755 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630814 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.630833 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.631659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.631695 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:49 crc kubenswrapper[4859]: I0120 09:18:49.631710 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.519285 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:51:30.324717529 +0000 UTC
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.636937 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be"}
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637042 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637083 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637094 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417"}
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637237 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2"}
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637280 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be"}
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637303 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.637325 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638097 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638127 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638140 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:50 crc kubenswrapper[4859]: I0120 09:18:50.638109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.322741 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.323071 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.325355 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.325420 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.325441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.520162 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:58:48.942098098 +0000 UTC
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.631067 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.645378 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b"}
Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.645468 4859 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.645555 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.645587 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.646992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647066 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647093 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647343 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647387 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:51 crc kubenswrapper[4859]: I0120 09:18:51.647403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:51 crc 
kubenswrapper[4859]: I0120 09:18:51.730122 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.521231 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:50:09.939512875 +0000 UTC Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.647668 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.647699 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649533 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649566 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:52 crc kubenswrapper[4859]: I0120 09:18:52.649634 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.522382 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:37:49.740712766 +0000 UTC Jan 
20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.892003 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.892301 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.894223 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.894317 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:53 crc kubenswrapper[4859]: I0120 09:18:53.894343 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.312104 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.312365 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.314260 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.314362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.314413 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.319739 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.379632 4859 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.380163 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.382467 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.382537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.382556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.522699 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:53:57.051377659 +0000 UTC Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.654248 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.655432 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.655468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:54 crc kubenswrapper[4859]: I0120 09:18:54.655477 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.517419 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.523202 
4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:56:06.907077255 +0000 UTC Jan 20 09:18:55 crc kubenswrapper[4859]: E0120 09:18:55.643031 4859 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.657445 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.657549 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.659240 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.659310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.659333 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:55 crc kubenswrapper[4859]: I0120 09:18:55.677538 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:56 crc kubenswrapper[4859]: I0120 09:18:56.523400 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:53:25.223442666 +0000 UTC Jan 20 09:18:56 crc kubenswrapper[4859]: I0120 09:18:56.660112 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:56 crc kubenswrapper[4859]: I0120 09:18:56.661648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:56 
crc kubenswrapper[4859]: I0120 09:18:56.661723 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:56 crc kubenswrapper[4859]: I0120 09:18:56.661746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:56 crc kubenswrapper[4859]: I0120 09:18:56.665252 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:18:57 crc kubenswrapper[4859]: I0120 09:18:57.523733 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:29:10.710757157 +0000 UTC Jan 20 09:18:57 crc kubenswrapper[4859]: I0120 09:18:57.662602 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:57 crc kubenswrapper[4859]: I0120 09:18:57.663830 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:57 crc kubenswrapper[4859]: I0120 09:18:57.663867 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:57 crc kubenswrapper[4859]: I0120 09:18:57.663877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:58 crc kubenswrapper[4859]: I0120 09:18:58.517418 4859 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:18:58 crc kubenswrapper[4859]: I0120 09:18:58.517814 4859 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 09:18:58 crc kubenswrapper[4859]: I0120 09:18:58.524587 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:23:58.144001545 +0000 UTC Jan 20 09:18:58 crc kubenswrapper[4859]: W0120 09:18:58.613607 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 09:18:58 crc kubenswrapper[4859]: I0120 09:18:58.613767 4859 trace.go:236] Trace[1739133218]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:18:48.612) (total time: 10001ms): Jan 20 09:18:58 crc kubenswrapper[4859]: Trace[1739133218]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:18:58.613) Jan 20 09:18:58 crc kubenswrapper[4859]: Trace[1739133218]: [10.001662956s] [10.001662956s] END Jan 20 09:18:58 crc kubenswrapper[4859]: E0120 09:18:58.613869 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 09:18:58 crc kubenswrapper[4859]: E0120 09:18:58.749039 4859 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 20 09:18:58 crc kubenswrapper[4859]: W0120 09:18:58.878739 4859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 09:18:58 crc kubenswrapper[4859]: I0120 09:18:58.878851 4859 trace.go:236] Trace[2121861400]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:18:48.877) (total time: 10001ms): Jan 20 09:18:58 crc kubenswrapper[4859]: Trace[2121861400]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:18:58.878) Jan 20 09:18:58 crc kubenswrapper[4859]: Trace[2121861400]: [10.001335858s] [10.001335858s] END Jan 20 09:18:58 crc kubenswrapper[4859]: E0120 09:18:58.878874 4859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.116319 4859 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.116398 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.371739 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.371987 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.373094 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.373126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.373159 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.389137 4859 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]log ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]etcd ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 09:18:59 crc 
kubenswrapper[4859]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-filter ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-apiextensions-informers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-apiextensions-controllers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/crd-informer-synced ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-system-namespaces-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 20 09:18:59 crc kubenswrapper[4859]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/bootstrap-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/start-kube-aggregator-informers ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-registration-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-discovery-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]autoregister-completion ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-openapi-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 20 09:18:59 crc kubenswrapper[4859]: livez check failed Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.389865 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:18:59 crc kubenswrapper[4859]: I0120 09:18:59.525583 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:47:57.600756208 +0000 UTC Jan 20 09:19:00 crc kubenswrapper[4859]: I0120 09:19:00.526564 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:29:48.779867205 +0000 UTC Jan 20 09:19:01 crc kubenswrapper[4859]: I0120 09:19:01.527022 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:17:49.359104008 +0000 UTC Jan 20 09:19:01 crc kubenswrapper[4859]: I0120 09:19:01.949272 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:19:01 crc 
kubenswrapper[4859]: I0120 09:19:01.950904 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:01 crc kubenswrapper[4859]: I0120 09:19:01.950954 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:01 crc kubenswrapper[4859]: I0120 09:19:01.950972 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:01 crc kubenswrapper[4859]: I0120 09:19:01.951008 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 09:19:01 crc kubenswrapper[4859]: E0120 09:19:01.955540 4859 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 09:19:02 crc kubenswrapper[4859]: I0120 09:19:02.528088 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:34:28.163834314 +0000 UTC Jan 20 09:19:03 crc kubenswrapper[4859]: I0120 09:19:03.529129 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:13:55.473249173 +0000 UTC Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.128572 4859 trace.go:236] Trace[1805957121]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:18:52.728) (total time: 11400ms): Jan 20 09:19:04 crc kubenswrapper[4859]: Trace[1805957121]: ---"Objects listed" error: 11400ms (09:19:04.128) Jan 20 09:19:04 crc kubenswrapper[4859]: Trace[1805957121]: [11.400253622s] [11.400253622s] END Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.128626 4859 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:04 crc 
kubenswrapper[4859]: E0120 09:19:04.129999 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.131141 4859 trace.go:236] Trace[815036027]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:18:50.126) (total time: 14004ms): Jan 20 09:19:04 crc kubenswrapper[4859]: Trace[815036027]: ---"Objects listed" error: 14004ms (09:19:04.131) Jan 20 09:19:04 crc kubenswrapper[4859]: Trace[815036027]: [14.004924983s] [14.004924983s] END Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.131174 4859 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.133515 4859 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.137400 4859 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.161300 4859 csr.go:261] certificate signing request csr-8lrcn is approved, waiting to be issued Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.170229 4859 csr.go:257] certificate signing request csr-8lrcn is issued Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.383220 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.383394 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.384586 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.384635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.384644 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.391994 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.512966 4859 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60582->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.513032 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60582->192.168.126.11:17697: read: connection reset by peer" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.529651 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:10:01.381826813 +0000 UTC Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.682040 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.682439 4859 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.682500 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.683157 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.683209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.683221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:04 crc kubenswrapper[4859]: I0120 09:19:04.985174 4859 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.171681 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 09:14:04 +0000 UTC, rotation deadline is 2026-11-19 05:59:08.506809293 +0000 UTC Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.171718 4859 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7268h40m3.335093997s for next certificate rotation Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.221558 4859 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.412309 4859 transport.go:147] "Certificate rotation detected, shutting down client connections to 
start using new credentials" Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.412566 4859 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.412612 4859 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.516579 4859 apiserver.go:52] "Watching apiserver" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.520806 4859 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.521553 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-7ms2q","openshift-machine-config-operator/machine-config-daemon-knvgk","openshift-multus/multus-additional-cni-plugins-pg7bd","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-rhpfn","openshift-multus/multus-xqq7l","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522107 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522275 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.522426 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522457 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522525 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.522599 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522735 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.522753 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.522872 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.523356 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.523469 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7ms2q" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.524029 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.524491 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.524555 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.524621 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.530403 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.531152 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.531206 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:59:40.9931951 +0000 UTC Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.531341 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.531580 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.531739 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.532066 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.532656 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534424 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534440 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" 
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534502 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534741 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534766 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534834 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534770 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534944 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.534974 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535184 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535197 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535220 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535184 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 
20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535347 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535374 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535489 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535543 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535668 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535708 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.535728 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.536756 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.536969 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.540083 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.540317 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.540649 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.545950 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.556106 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.567300 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.604281 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.618255 4859 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.635403 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642008 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642056 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642083 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642105 4859 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642124 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642147 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642190 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642213 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642233 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642254 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642275 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642296 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642316 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642335 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642359 4859 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642379 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642402 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642424 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642446 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642469 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642490 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642512 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642534 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642554 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642574 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642594 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642613 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642635 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642653 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642669 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642688 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642705 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642722 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642740 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642761 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642797 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642818 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642837 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642858 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642876 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642903 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642940 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642962 
4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.642995 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643013 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643031 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643051 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643071 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643092 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643112 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643133 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643154 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643252 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643272 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643290 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643311 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643333 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643345 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643356 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643383 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643406 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643428 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643453 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643474 4859 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643496 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643517 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643539 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643560 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643581 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643622 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643760 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643800 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643825 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643879 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 09:19:05 crc kubenswrapper[4859]: 
I0120 09:19:05.643901 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643929 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643953 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643979 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644000 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644021 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644088 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644110 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644131 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644151 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644175 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 
09:19:05.644200 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644227 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644250 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644276 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644300 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644321 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644352 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644375 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644399 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644422 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644444 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644465 4859 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644487 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644509 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644535 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644558 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644579 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644601 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644625 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644696 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644814 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644847 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644873 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644896 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644918 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644940 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644960 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644983 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") 
" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645006 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645029 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645051 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645073 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645095 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645119 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645141 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645164 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645188 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645211 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645235 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc 
kubenswrapper[4859]: I0120 09:19:05.645259 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645285 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645309 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645334 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645359 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645384 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645409 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645433 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645457 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645482 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645506 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645530 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645553 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645582 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645606 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645635 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645659 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645684 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645708 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645730 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645754 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645798 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 09:19:05 
crc kubenswrapper[4859]: I0120 09:19:05.645824 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645849 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645873 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645895 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645918 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645942 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645963 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645987 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646010 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646037 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646060 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 
09:19:05.646081 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646103 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646127 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646155 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646178 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646201 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646225 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646249 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646273 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646296 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646317 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc 
kubenswrapper[4859]: I0120 09:19:05.646345 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646371 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646393 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646416 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646438 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646459 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646482 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646503 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646526 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646550 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646574 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646599 4859 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646623 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646648 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646674 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646709 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646734 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646777 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646819 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646849 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646875 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646902 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646927 4859 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646951 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646974 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646997 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647019 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647044 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647111 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-multus\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647138 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647166 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-os-release\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647187 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-cni-binary-copy\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647212 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647236 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647261 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dab032ef-85ae-456c-b5ea-750bc1c32483-mcd-auth-proxy-config\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647297 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647324 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647369 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647393 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647416 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpv7\" (UniqueName: \"kubernetes.io/projected/dab032ef-85ae-456c-b5ea-750bc1c32483-kube-api-access-lfpv7\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647443 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647469 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-binary-copy\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647494 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-multus-certs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647516 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647544 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647567 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647610 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-hostroot\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647636 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647660 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647682 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647706 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-daemon-config\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647736 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647757 
4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647798 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647825 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647854 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647880 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-system-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 
09:19:05.647902 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-kubelet\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647926 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-system-cni-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647999 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dab032ef-85ae-456c-b5ea-750bc1c32483-rootfs\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648042 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648069 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-bin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 
09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648093 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648117 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hv4\" (UniqueName: \"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648141 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/050ca282-e7f0-494e-a04c-4b74811dccfe-hosts-file\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648165 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvqj\" (UniqueName: \"kubernetes.io/projected/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-kube-api-access-9wvqj\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648189 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " 
pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648266 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-socket-dir-parent\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643675 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.643909 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644184 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644183 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644709 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.644947 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645032 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645103 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645263 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645443 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645509 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645628 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.645884 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646181 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646114 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646586 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.646847 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647241 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647375 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647522 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647689 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.647841 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648076 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648199 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648251 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.648292 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:06.14827172 +0000 UTC m=+20.904287896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649491 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-k8s-cni-cncf-io\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649589 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649599 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648565 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648772 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648815 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648856 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648895 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649029 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649047 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649249 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.650754 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651122 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658224 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651241 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651469 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651524 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651828 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.651972 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.652323 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.652515 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.652707 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.652799 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.653084 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.653225 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.653398 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.653661 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.653947 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.656880 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658021 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658079 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.648301 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658167 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658302 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658392 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658607 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658702 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.649625 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658769 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cnibin\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658831 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658866 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-netns\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658873 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658896 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xbs\" (UniqueName: \"kubernetes.io/projected/81947dc9-599a-4d35-a9c5-2684294a3afb-kube-api-access-45xbs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658937 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658952 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658958 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658984 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dab032ef-85ae-456c-b5ea-750bc1c32483-proxy-tls\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.658983 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659038 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659065 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659091 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659098 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659256 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659504 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659793 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659906 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660065 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660234 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660343 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660363 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660460 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660665 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660735 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660915 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.660945 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661071 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661096 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661236 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661351 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661443 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661570 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661640 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661707 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661811 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.661993 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662054 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662151 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662238 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662305 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662426 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662520 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662566 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662628 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662851 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.662998 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.663174 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.663197 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.663643 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.663867 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.663882 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664025 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664079 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664256 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664552 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664648 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664929 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.664944 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665367 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.659117 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665457 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665494 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-os-release\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665584 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665609 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665636 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-cnibin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665659 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665683 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665735 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5g8k\" (UniqueName: \"kubernetes.io/projected/050ca282-e7f0-494e-a04c-4b74811dccfe-kube-api-access-b5g8k\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665810 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665837 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-conf-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.665859 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-etc-kubernetes\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666121 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666281 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666646 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666704 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666711 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.666810 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666826 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.666873 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:06.166853371 +0000 UTC m=+20.922869547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666918 4859 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666936 4859 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666951 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666975 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666990 4859 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667002 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: 
I0120 09:19:05.667017 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667032 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667046 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667042 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667129 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667175 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.666669 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667405 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667501 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667512 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667636 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667661 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667775 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667923 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667971 4859 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668029 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668059 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668178 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668333 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668343 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668373 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.667060 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.668488 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.669733 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.669908 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.670010 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.670936 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.671011 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:06.17099282 +0000 UTC m=+20.927008996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.671395 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.671761 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672139 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672156 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672467 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672683 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672862 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672883 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672932 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672941 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672996 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.672282 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.673438 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.673463 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.673562 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.673586 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.673600 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.673645 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:06.17363172 +0000 UTC m=+20.929647916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.673729 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.673944 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674139 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674501 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674543 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674893 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674965 4859 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.674986 4859 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675003 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675013 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675023 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675075 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675086 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 
09:19:05.675095 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675105 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675114 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675125 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675136 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675146 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675155 4859 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675164 4859 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675173 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675182 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675191 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675201 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675210 4859 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675219 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675228 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675238 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675248 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675257 4859 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675266 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675276 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675285 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675294 4859 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on 
node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675303 4859 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675312 4859 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675321 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675331 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675342 4859 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675352 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675361 4859 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 
09:19:05.675372 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675402 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675413 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675422 4859 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675431 4859 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675439 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675448 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675459 4859 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675470 4859 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675479 4859 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675488 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675525 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675537 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675547 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675549 4859 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675578 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675647 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675724 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.675940 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676220 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676248 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676261 4859 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676272 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676283 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676296 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676307 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676317 4859 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676328 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676339 4859 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676350 4859 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676361 4859 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676371 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676380 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676389 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676398 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676407 4859 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676418 4859 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676427 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676436 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" 
DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676444 4859 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676454 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676463 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676467 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676472 4859 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676523 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676533 4859 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676545 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676554 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676562 4859 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676570 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676580 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676588 4859 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676597 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676629 4859 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676638 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676647 4859 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676655 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676664 4859 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676672 4859 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676681 4859 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676690 4859 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676700 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676709 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676718 4859 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676727 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676736 4859 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676744 4859 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676753 4859 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on 
node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676761 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676770 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676792 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676816 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676220 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676470 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676758 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.676903 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.677069 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.677086 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.677077 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.677171 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.677981 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.678004 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.680951 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.681192 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.681545 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.683556 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.683642 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.683866 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.684440 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.684939 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.688156 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.688758 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.688885 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.688793 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.689172 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.689654 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.693536 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.693683 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.693721 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.694117 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.694507 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.695032 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.695277 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.695750 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.695767 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.695799 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.695872 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:06.195855665 +0000 UTC m=+20.951871841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.695959 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.696304 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce" exitCode=255 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.696353 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce"} Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.699694 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.699987 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.700195 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.699902 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.699888 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: E0120 09:19:05.706289 4859 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.707412 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.707961 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.708769 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.709833 4859 scope.go:117] "RemoveContainer" containerID="667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.716320 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.716713 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.725610 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.726941 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.729735 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.735162 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.744900 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.759076 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.768796 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.776289 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777600 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-conf-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777637 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-etc-kubernetes\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777657 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-multus\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777673 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777691 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-os-release\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777707 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-cni-binary-copy\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777722 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777769 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777810 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777832 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dab032ef-85ae-456c-b5ea-750bc1c32483-mcd-auth-proxy-config\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777906 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-multus-certs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777924 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-etc-kubernetes\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777974 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777936 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778021 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778050 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778079 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfpv7\" (UniqueName: \"kubernetes.io/projected/dab032ef-85ae-456c-b5ea-750bc1c32483-kube-api-access-lfpv7\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778101 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-binary-copy\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778136 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778154 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-hostroot\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778171 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778185 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778226 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-system-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778238 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778243 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-kubelet\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778291 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-daemon-config\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.777924 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-conf-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778317 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778323 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778056 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778340 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778361 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778384 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dab032ef-85ae-456c-b5ea-750bc1c32483-rootfs\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778405 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778426 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-system-cni-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778448 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778470 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-bin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778490 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778509 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92hv4\" (UniqueName: \"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778530 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/050ca282-e7f0-494e-a04c-4b74811dccfe-hosts-file\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778550 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wvqj\" (UniqueName: \"kubernetes.io/projected/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-kube-api-access-9wvqj\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778574 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-socket-dir-parent\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778598 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-k8s-cni-cncf-io\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778632 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cnibin\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778657 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778665 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-system-cni-dir\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778677 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dab032ef-85ae-456c-b5ea-750bc1c32483-proxy-tls\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778747 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-os-release\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778773 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-netns\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778815 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778823 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xbs\" (UniqueName: \"kubernetes.io/projected/81947dc9-599a-4d35-a9c5-2684294a3afb-kube-api-access-45xbs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778850 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778881 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-hostroot\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778880 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dab032ef-85ae-456c-b5ea-750bc1c32483-mcd-auth-proxy-config\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778891 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778972 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778997 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-os-release\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779019 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779027 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779040 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779052 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-socket-dir-parent\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779069 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778263 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-kubelet\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779073 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779124 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-netns\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778297 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-multus\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779136 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-multus-certs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779184 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779186 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779144 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-cnibin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779225 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779247 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779300 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779327 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5g8k\" (UniqueName: \"kubernetes.io/projected/050ca282-e7f0-494e-a04c-4b74811dccfe-kube-api-access-b5g8k\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779417 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-binary-copy\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779440 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-var-lib-cni-bin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779102 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-host-run-k8s-cni-cncf-io\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778626 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-cni-binary-copy\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.778081 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779822 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779841 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-os-release\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779895 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779913 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.779925 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81947dc9-599a-4d35-a9c5-2684294a3afb-multus-daemon-config\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780029 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/050ca282-e7f0-494e-a04c-4b74811dccfe-hosts-file\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780121 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-system-cni-dir\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780217 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81947dc9-599a-4d35-a9c5-2684294a3afb-cnibin\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780265 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cnibin\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780370 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780283 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dab032ef-85ae-456c-b5ea-750bc1c32483-rootfs\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780406 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780284 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.780967 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.781746 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd"
Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.781838 4859 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782040 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782152 4859 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782182 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782196 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782209 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782224 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc 
kubenswrapper[4859]: I0120 09:19:05.782237 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782250 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782261 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782274 4859 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782285 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782298 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782311 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782335 4859 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782348 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782360 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782375 4859 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782388 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782400 4859 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782410 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782423 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782436 4859 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782449 4859 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782461 4859 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782473 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782485 4859 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782497 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782497 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib\") pod 
\"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782509 4859 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782571 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782585 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782618 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782632 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782644 4859 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782657 4859 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on 
node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782667 4859 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782699 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782712 4859 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782725 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782739 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782751 4859 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782796 4859 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782809 4859 reconciler_common.go:293] "Volume 
detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782822 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782854 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782867 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782878 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782890 4859 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782902 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782943 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782950 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dab032ef-85ae-456c-b5ea-750bc1c32483-proxy-tls\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782955 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.782996 4859 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783009 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783024 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783037 4859 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783048 4859 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783060 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783072 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783084 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783097 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783109 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783120 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783137 4859 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783149 4859 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783160 4859 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783172 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783185 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783198 4859 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783211 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783224 4859 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783243 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783256 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783268 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783280 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783292 4859 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783304 4859 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783315 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") 
on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783326 4859 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783353 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783364 4859 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783376 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783388 4859 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783399 4859 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.783411 4859 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.785082 4859 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert\") pod \"ovnkube-node-rhpfn\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.790947 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.801026 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfpv7\" (UniqueName: \"kubernetes.io/projected/dab032ef-85ae-456c-b5ea-750bc1c32483-kube-api-access-lfpv7\") pod \"machine-config-daemon-knvgk\" (UID: \"dab032ef-85ae-456c-b5ea-750bc1c32483\") " pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.801466 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xbs\" (UniqueName: \"kubernetes.io/projected/81947dc9-599a-4d35-a9c5-2684294a3afb-kube-api-access-45xbs\") pod \"multus-xqq7l\" (UID: \"81947dc9-599a-4d35-a9c5-2684294a3afb\") " pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.803035 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5g8k\" (UniqueName: \"kubernetes.io/projected/050ca282-e7f0-494e-a04c-4b74811dccfe-kube-api-access-b5g8k\") pod \"node-resolver-7ms2q\" (UID: \"050ca282-e7f0-494e-a04c-4b74811dccfe\") " pod="openshift-dns/node-resolver-7ms2q" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.803470 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wvqj\" (UniqueName: \"kubernetes.io/projected/ab750176-1775-4e98-ba5e-3b7bab1f6f2d-kube-api-access-9wvqj\") pod \"multus-additional-cni-plugins-pg7bd\" (UID: \"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\") " pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.803603 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92hv4\" (UniqueName: \"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4\") pod \"ovnkube-node-rhpfn\" (UID: 
\"cfe04730-660d-4e59-8b5e-15e94d72990f\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.803743 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.813498 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.822211 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.832381 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.841837 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.848481 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.851952 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.865061 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.865846 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.868492 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5b8375aebc5ca878e0f6bd03d3ddd12185b3e318d214ac47dde9dc7499a45a1d WatchSource:0}: Error finding container 5b8375aebc5ca878e0f6bd03d3ddd12185b3e318d214ac47dde9dc7499a45a1d: Status 404 returned error can't find the container with id 5b8375aebc5ca878e0f6bd03d3ddd12185b3e318d214ac47dde9dc7499a45a1d Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.874769 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b7afb2f331e6e136d94b5540a9ec7e719a21962e42e28fbf85294e93aac92bf2 WatchSource:0}: Error finding container b7afb2f331e6e136d94b5540a9ec7e719a21962e42e28fbf85294e93aac92bf2: Status 404 returned error can't find the container with id b7afb2f331e6e136d94b5540a9ec7e719a21962e42e28fbf85294e93aac92bf2 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.875482 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.876413 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.884084 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.884125 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.892872 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.898052 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d31b5cc895a5d88555d9e43c2c9cfb5073668770b3defca2661c8765416456ce WatchSource:0}: Error finding container d31b5cc895a5d88555d9e43c2c9cfb5073668770b3defca2661c8765416456ce: Status 404 returned error can't find the container with id d31b5cc895a5d88555d9e43c2c9cfb5073668770b3defca2661c8765416456ce Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.901536 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.909319 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.917920 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.922635 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddab032ef_85ae_456c_b5ea_750bc1c32483.slice/crio-183cc608662fb9b1b92d79577c6236290fe8ed565b02378c114aacbdaef3ef40 WatchSource:0}: Error finding container 183cc608662fb9b1b92d79577c6236290fe8ed565b02378c114aacbdaef3ef40: Status 404 returned error can't find the container with id 183cc608662fb9b1b92d79577c6236290fe8ed565b02378c114aacbdaef3ef40 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.929384 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.929875 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7ms2q" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.929962 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.936058 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.938718 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xqq7l" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.949736 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.951664 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.962753 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.975013 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.984741 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.986892 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050ca282_e7f0_494e_a04c_4b74811dccfe.slice/crio-2e0779eb2d76577b35b5eb305a55b9a1e3f7d31d9bf662b25e47fc77c5059e5e WatchSource:0}: Error finding container 2e0779eb2d76577b35b5eb305a55b9a1e3f7d31d9bf662b25e47fc77c5059e5e: Status 404 returned error can't find the container with id 2e0779eb2d76577b35b5eb305a55b9a1e3f7d31d9bf662b25e47fc77c5059e5e Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.989854 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab750176_1775_4e98_ba5e_3b7bab1f6f2d.slice/crio-0e1263f49d5973b0915b1ff63ab26f3a29604b33519296118d270878d17ede0e WatchSource:0}: Error finding container 0e1263f49d5973b0915b1ff63ab26f3a29604b33519296118d270878d17ede0e: Status 404 
returned error can't find the container with id 0e1263f49d5973b0915b1ff63ab26f3a29604b33519296118d270878d17ede0e Jan 20 09:19:05 crc kubenswrapper[4859]: W0120 09:19:05.993744 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfe04730_660d_4e59_8b5e_15e94d72990f.slice/crio-4203edfc5c6de74ffae807f09750455273a2e4f2f68d3bb1f4e779759a0a9e58 WatchSource:0}: Error finding container 4203edfc5c6de74ffae807f09750455273a2e4f2f68d3bb1f4e779759a0a9e58: Status 404 returned error can't find the container with id 4203edfc5c6de74ffae807f09750455273a2e4f2f68d3bb1f4e779759a0a9e58 Jan 20 09:19:05 crc kubenswrapper[4859]: I0120 09:19:05.996843 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.006282 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.016156 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.024617 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.185878 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.185996 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186000 4859 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:07.185974878 +0000 UTC m=+21.941991054 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186083 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.186106 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186128 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:07.186114561 +0000 UTC m=+21.942130737 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.186143 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186246 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186290 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:07.186282915 +0000 UTC m=+21.942299161 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186365 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186391 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186403 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.186459 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:07.18644373 +0000 UTC m=+21.942459906 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.239700 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.287451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.287590 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.287606 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.287616 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:06 crc kubenswrapper[4859]: E0120 09:19:06.287660 4859 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:07.28764839 +0000 UTC m=+22.043664566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.532019 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:33:41.83285042 +0000 UTC Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.705361 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.705555 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d31b5cc895a5d88555d9e43c2c9cfb5073668770b3defca2661c8765416456ce"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.708966 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22" exitCode=0 Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.709041 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.709072 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"4203edfc5c6de74ffae807f09750455273a2e4f2f68d3bb1f4e779759a0a9e58"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.711656 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerStarted","Data":"c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.711693 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerStarted","Data":"b4a8f5bb71ac64cbd1694a412ddbaceee27ba2fc412be45a680e50a6a3b537c6"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.713288 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b7afb2f331e6e136d94b5540a9ec7e719a21962e42e28fbf85294e93aac92bf2"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.716532 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.718738 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.718945 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.720287 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.720320 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.720336 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"183cc608662fb9b1b92d79577c6236290fe8ed565b02378c114aacbdaef3ef40"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.721692 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f" exitCode=0 Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.721758 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.721793 4859 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerStarted","Data":"0e1263f49d5973b0915b1ff63ab26f3a29604b33519296118d270878d17ede0e"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.722900 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.723235 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7ms2q" event={"ID":"050ca282-e7f0-494e-a04c-4b74811dccfe","Type":"ContainerStarted","Data":"80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.723258 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7ms2q" event={"ID":"050ca282-e7f0-494e-a04c-4b74811dccfe","Type":"ContainerStarted","Data":"2e0779eb2d76577b35b5eb305a55b9a1e3f7d31d9bf662b25e47fc77c5059e5e"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.726369 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.726408 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.726419 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b8375aebc5ca878e0f6bd03d3ddd12185b3e318d214ac47dde9dc7499a45a1d"} Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.735567 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.747060 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.758626 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.769438 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.782811 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.832763 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.849070 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.862918 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.876954 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.891143 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.912250 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.928074 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.946865 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.961382 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.983143 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:06 crc kubenswrapper[4859]: I0120 09:19:06.997329 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:06Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.005560 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.023562 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.039326 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.053522 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.077717 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.105407 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.130142 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.169172 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.187030 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa94
8b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.196942 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.197059 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.197094 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.197122 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197204 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197261 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:09.197244669 +0000 UTC m=+23.953260845 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197576 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:09.197561877 +0000 UTC m=+23.953578053 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197370 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197384 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197636 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197649 4859 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197617 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:09.197610048 +0000 UTC m=+23.953626224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.197699 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:09.19768607 +0000 UTC m=+23.953702246 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.285283 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lf9ds"] Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.285713 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.288155 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.288302 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.288167 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.289765 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.298451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.298603 4859 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.298622 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.298634 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.298696 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:09.298678165 +0000 UTC m=+24.054694341 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.302082 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resol
ver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.330109 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.343756 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.359246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.370579 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.384604 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.396319 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.398939 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnmq\" (UniqueName: \"kubernetes.io/projected/a2998e90-0271-4de9-8998-64cf330dafcb-kube-api-access-fmnmq\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.398965 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2998e90-0271-4de9-8998-64cf330dafcb-host\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.398981 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a2998e90-0271-4de9-8998-64cf330dafcb-serviceca\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.427334 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc 
kubenswrapper[4859]: I0120 09:19:07.469349 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc 
kubenswrapper[4859]: I0120 09:19:07.500442 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnmq\" (UniqueName: \"kubernetes.io/projected/a2998e90-0271-4de9-8998-64cf330dafcb-kube-api-access-fmnmq\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.500681 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2998e90-0271-4de9-8998-64cf330dafcb-host\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.500706 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a2998e90-0271-4de9-8998-64cf330dafcb-serviceca\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.500768 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2998e90-0271-4de9-8998-64cf330dafcb-host\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.502008 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a2998e90-0271-4de9-8998-64cf330dafcb-serviceca\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.508850 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.532735 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:28:41.314759653 +0000 UTC Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.536359 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnmq\" (UniqueName: \"kubernetes.io/projected/a2998e90-0271-4de9-8998-64cf330dafcb-kube-api-access-fmnmq\") pod \"node-ca-lf9ds\" (UID: \"a2998e90-0271-4de9-8998-64cf330dafcb\") " pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.571653 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.572840 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.572864 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.572932 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.572959 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.573087 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:07 crc kubenswrapper[4859]: E0120 09:19:07.573175 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.577495 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.578448 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.580125 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.581094 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.581999 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.582745 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.583594 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.584443 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.585569 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.586505 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.589128 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.590557 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.591346 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.592069 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.592773 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.593518 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.594426 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.595031 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.596428 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.598030 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.598952 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.600640 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.601301 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.602701 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.603403 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.604248 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.604974 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.605445 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.606036 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.606505 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.606967 4859 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.607061 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.608302 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.608504 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lf9ds" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.609872 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.610606 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.611165 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.613944 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.615448 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.616229 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.617855 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.618846 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.620126 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.621518 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.622531 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.624031 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.624672 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.626169 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.627000 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.628681 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.629627 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.631178 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.631852 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.632656 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.634167 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.634871 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.654410 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.684859 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.744084 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.744127 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" 
event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.744138 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.744147 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.748186 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae" exitCode=0 Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.748243 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.762822 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lf9ds" event={"ID":"a2998e90-0271-4de9-8998-64cf330dafcb","Type":"ContainerStarted","Data":"cf64aecb4908f1e3dd664b9e89e224ef628b1b6dab514b15ba56e22414f53e7e"} Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.769246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.783362 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.805979 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.846886 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.885352 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.927374 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:07 crc kubenswrapper[4859]: I0120 09:19:07.971056 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:07Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.014347 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.046488 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.088507 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.132402 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.168493 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.210879 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa94
8b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.249065 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.356301 4859 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:19:08 crc 
kubenswrapper[4859]: I0120 09:19:08.358878 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.358922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.358934 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.359113 4859 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.367479 4859 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.367948 4859 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.369246 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.369290 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.369302 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.369324 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.369336 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.394829 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.399257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.399298 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.399307 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.399323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.399335 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.416028 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.420239 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.420272 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.420283 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.420299 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.420309 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.433250 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.437675 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.437720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.437730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.437747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.437760 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.454534 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.459459 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.459501 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.459511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.459525 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.459535 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.476099 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: E0120 09:19:08.476241 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.477899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.477958 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.477978 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.478007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.478027 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.533728 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:57:58.393354599 +0000 UTC Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.580591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.580658 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.580677 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.580700 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.580718 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.683426 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.683477 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.683489 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.683508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.683524 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.768911 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7" exitCode=0 Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.769010 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.770560 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lf9ds" event={"ID":"a2998e90-0271-4de9-8998-64cf330dafcb","Type":"ContainerStarted","Data":"af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.775732 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.775773 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.788092 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.788174 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.788197 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.788227 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.788255 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.793562 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.814397 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.833418 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.847196 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa94
8b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.861953 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.875334 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.889072 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.891862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.892053 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.892189 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.892351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.892472 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.902696 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.918132 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.928965 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.943612 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.964522 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.976618 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f64
45c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.993981 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:08Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.994298 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.994314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.994322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.994335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:08 crc kubenswrapper[4859]: I0120 09:19:08.994344 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:08Z","lastTransitionTime":"2026-01-20T09:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.002307 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.017729 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.067350 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.094174 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.096633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.096678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.096691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 
09:19:09.096709 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.096722 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.105822 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.122552 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.136011 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa94
8b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.169457 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.200297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc 
kubenswrapper[4859]: I0120 09:19:09.200353 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.200371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.200394 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.200411 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.214830 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.217765 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.217969 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:13.217935543 +0000 UTC m=+27.973951749 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.218041 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.218133 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.218210 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218265 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218291 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218305 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218375 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:13.218354004 +0000 UTC m=+27.974370180 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218379 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218428 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218459 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:13.218435286 +0000 UTC m=+27.974451672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.218525 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:13.218496217 +0000 UTC m=+27.974512433 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.261352 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 
09:19:09.293835 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.302925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.302986 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.303004 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.303031 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.303053 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.318992 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.319200 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.319252 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.319274 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.319352 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:13.319328538 +0000 UTC m=+28.075344754 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.337339 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.376552 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.406767 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.406882 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.406906 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.406946 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.406970 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.407071 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.419580 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.426169 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.435504 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.468398 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.510406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.510494 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.510508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.510527 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.510541 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.522104 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.534066 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:35:49.913241585 +0000 UTC Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.558363 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.573664 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.573737 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.573764 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.573903 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.574074 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.574217 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.592630 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.613302 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.613371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc 
kubenswrapper[4859]: I0120 09:19:09.613390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.613416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.613435 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.635648 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.674907 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.709649 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.717093 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.717161 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.717176 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.717202 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.717221 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.748366 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.786501 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845" exitCode=0 Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.786577 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.789390 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.800110 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: E0120 09:19:09.809547 4859 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 20 
09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.828459 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.829298 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.829367 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.829399 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.829478 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.854021 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3b
c09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.892743 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.931211 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.953515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.953573 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.953590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.953613 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.953632 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:09Z","lastTransitionTime":"2026-01-20T09:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:09 crc kubenswrapper[4859]: I0120 09:19:09.975612 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:09Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 
09:19:10.009459 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.057995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.058044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.058056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.058073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.058087 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.058598 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.087219 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.138465 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.160020 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.160064 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.160072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.160086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.160095 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.166891 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.207274 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.247856 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.263010 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.263048 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.263056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.263070 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.263079 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.292450 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.329641 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.366166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.366202 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.366211 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.366227 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.366243 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.372582 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.407560 4859 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.449061 4859 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.469128 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.469180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.469198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.469235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.469252 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.486076 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.533576 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.534260 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:05:28.920686828 +0000 UTC Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.569157 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.571505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.571563 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.571582 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.571606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.571625 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.604986 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.674924 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.675007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.675025 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.675048 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.675072 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.777882 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.777956 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.777976 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.778004 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.778023 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.798088 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.802577 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43" exitCode=0 Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.802635 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.825860 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.847438 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.864896 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.881069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.881134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.881148 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.881173 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.881191 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.885221 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.906002 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c
5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.926709 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.946162 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.971895 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.983392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.983441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.983467 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.983497 4859 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.983519 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:10Z","lastTransitionTime":"2026-01-20T09:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:10 crc kubenswrapper[4859]: I0120 09:19:10.985049 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.029536 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.049723 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.086877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.086918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.086930 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.086947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc 
kubenswrapper[4859]: I0120 09:19:11.086958 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.104747 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.133195 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.170448 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.190655 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.190728 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.190753 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 
09:19:11.190827 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.190854 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.210652 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.294125 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.294193 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.294210 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.294238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.294258 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.397101 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.397166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.397189 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.397221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.397245 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.499722 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.499776 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.499822 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.499852 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.499869 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.534756 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:09:22.965583032 +0000 UTC Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.573424 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.573429 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:11 crc kubenswrapper[4859]: E0120 09:19:11.573590 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.573636 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:11 crc kubenswrapper[4859]: E0120 09:19:11.573695 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:11 crc kubenswrapper[4859]: E0120 09:19:11.573896 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.603509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.603559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.603574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.603596 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.603611 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.707037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.707107 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.707124 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.707151 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.707168 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.809761 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.809858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.809876 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.809901 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.809922 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.811554 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab750176-1775-4e98-ba5e-3b7bab1f6f2d" containerID="a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3" exitCode=0 Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.811614 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerDied","Data":"a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.856993 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.877740 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.904705 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.913865 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.913919 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.913935 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.913957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.913976 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:11Z","lastTransitionTime":"2026-01-20T09:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.922981 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.941130 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.967541 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:11 crc kubenswrapper[4859]: I0120 09:19:11.993256 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:11Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.011031 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.016289 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.016334 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.016350 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.016368 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.016381 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.029419 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.050598 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.067659 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.089664 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.105173 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.120563 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.120639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.120658 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.120689 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.120708 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.175376 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.195388 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.224340 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.224396 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.224406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.224427 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.224444 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.328335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.328389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.328408 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.328432 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.328450 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.431505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.431559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.431574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.431594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.431609 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.534899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.534949 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.534965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.534987 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.534950 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:46:39.513221119 +0000 UTC Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.535004 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.637686 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.637714 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.637723 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.637737 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.637747 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.739928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.739992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.740008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.740033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.740050 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.822075 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.822385 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.828227 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" event={"ID":"ab750176-1775-4e98-ba5e-3b7bab1f6f2d","Type":"ContainerStarted","Data":"470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.842441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.842514 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.842539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.842569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.842593 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.856025 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.857655 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.872740 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.898082 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.920954 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.938827 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.950232 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.950304 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.950327 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 
09:19:12.950357 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.950380 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:12Z","lastTransitionTime":"2026-01-20T09:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.958717 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.974905 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:12 crc kubenswrapper[4859]: I0120 09:19:12.988525 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:12Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.013543 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.028137 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.043026 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.052996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.053053 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.053071 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.053097 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.053115 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.065048 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.079937 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.097493 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.127246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.144064 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.155911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.156067 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.156101 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.156134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.156172 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.168262 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.199512 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.214845 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.227933 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.243590 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.258263 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.258351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.258372 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.258398 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.258415 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.265168 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.279988 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.284303 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.284437 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284515 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:21.284490575 +0000 UTC m=+36.040506791 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.284576 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.284646 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284581 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284726 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284806 4859 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:21.284758473 +0000 UTC m=+36.040774689 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284845 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284856 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:21.284828625 +0000 UTC m=+36.040844811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284872 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284892 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.284960 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:21.284944418 +0000 UTC m=+36.040960634 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.299667 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.318970 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\
\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.335257 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.353613 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.361218 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.361284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.361301 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.361379 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.361418 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.374551 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28
657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.386376 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.386643 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.386692 4859 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.386715 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.386842 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:21.386778544 +0000 UTC m=+36.142794760 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.388933 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.410686 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.464917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.464967 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.464977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.464996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.465008 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.535240 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:11:00.544424271 +0000 UTC Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.568595 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.568647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.568665 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.568697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.568719 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.572636 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.572833 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.572873 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.573124 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.573302 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:13 crc kubenswrapper[4859]: E0120 09:19:13.573519 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.674270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.674344 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.674368 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.674400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.674430 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.777203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.777317 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.777343 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.777375 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.777396 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.832532 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.833299 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.866071 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.880754 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.880842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.880859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.880884 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.880901 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.905820 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.923637 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.950640 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.972940 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.983390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.983433 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.983452 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.983475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.983494 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:13Z","lastTransitionTime":"2026-01-20T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:13 crc kubenswrapper[4859]: I0120 09:19:13.994529 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:13Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.015041 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.038551 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.056729 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.074524 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.086033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.086075 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.086086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.086103 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.086114 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.098253 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.117613 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c
5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.137328 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.152467 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.178086 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.187976 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.188021 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.188035 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.188051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.188061 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.195239 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:14Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.289945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.289980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.289988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.290003 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.290013 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.393017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.393052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.393092 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.393109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.393118 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.495364 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.495403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.495413 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.495430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.495440 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.535695 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:59:07.900256377 +0000 UTC Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.598936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.599014 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.599063 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.599085 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.599102 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.701593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.701651 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.701667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.701691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.701709 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.804348 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.804388 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.804399 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.804415 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.804427 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.836517 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.906877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.906914 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.906926 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.906945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:14 crc kubenswrapper[4859]: I0120 09:19:14.906959 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:14Z","lastTransitionTime":"2026-01-20T09:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.009700 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.009758 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.009774 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.009823 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.009840 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.112900 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.112979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.113001 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.113029 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.113052 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.223443 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.223499 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.223511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.223528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.223540 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.326451 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.326510 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.326528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.326551 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.326567 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.428569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.428650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.428673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.428696 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.428716 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.531813 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.531881 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.531898 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.531922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.531940 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.535896 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:46:40.252692293 +0000 UTC Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.573593 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:15 crc kubenswrapper[4859]: E0120 09:19:15.573839 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.573897 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.573898 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:15 crc kubenswrapper[4859]: E0120 09:19:15.574143 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:15 crc kubenswrapper[4859]: E0120 09:19:15.574264 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.589486 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.610309 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.631458 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.634701 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.634754 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.634770 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.634831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.634854 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.661200 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.680965 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.716625 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09
:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.735841 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.738012 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.738073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.738094 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.738125 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.738149 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.768629 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.787589 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.809156 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.830208 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840266 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840280 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840309 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.840736 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.848622 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.864184 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.880573 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.900563 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.943570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:15 crc 
kubenswrapper[4859]: I0120 09:19:15.943621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.943639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.943661 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:15 crc kubenswrapper[4859]: I0120 09:19:15.943679 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:15Z","lastTransitionTime":"2026-01-20T09:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.070651 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.070694 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.070707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.070722 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.070734 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.172992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.173066 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.173085 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.173118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.173142 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.276005 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.276073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.276092 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.276118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.276138 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.332157 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.358163 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.378898 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.378968 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.378988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.379015 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc 
kubenswrapper[4859]: I0120 09:19:16.379035 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.379643 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.401019 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.421958 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.435261 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.452460 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.472549 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.482448 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.482506 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.482525 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.482549 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.482566 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.496250 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3b
c09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.514324 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.533567 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.536746 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:32:11.016001407 +0000 UTC Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.561013 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.580668 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.585229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.585280 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.585296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.585320 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.585337 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.611687 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d9
1dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.626772 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.651138 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.688454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.688518 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.688536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.688561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.688580 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.795328 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.795406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.795429 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.795457 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.795479 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.847345 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/0.log" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.852676 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298" exitCode=1 Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.852734 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.853906 4859 scope.go:117] "RemoveContainer" containerID="4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.875466 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.896779 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.900083 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.900128 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.900146 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:16 crc 
kubenswrapper[4859]: I0120 09:19:16.900168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.900183 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:16Z","lastTransitionTime":"2026-01-20T09:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.916966 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.935904 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.948338 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.970418 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:16 crc kubenswrapper[4859]: I0120 09:19:16.983269 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:16Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.003722 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.003828 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.003853 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.003883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc 
kubenswrapper[4859]: I0120 09:19:17.003903 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.013515 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.035046 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.053922 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.074572 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.094355 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.105947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.105981 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.106024 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.106044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.106096 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.106370 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.114026 4859 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.119166 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.130264 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
trypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.209144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.209178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.209186 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.209200 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.209208 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.213242 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc"]
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.213733 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.216502 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.216823 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.228641 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.242715 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.256891 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.267986 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.277893 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.280772 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.280870 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.280907 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2m7x\" (UniqueName: \"kubernetes.io/projected/e594a558-b805-4f1f-9cfa-a50d02390b4e-kube-api-access-h2m7x\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.280941 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: 
\"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.290580 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.304118 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.311461 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.311493 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.311501 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.311514 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.311534 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.322122 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28
657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.334456 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.353119 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.365626 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.382700 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.382765 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.382831 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2m7x\" (UniqueName: \"kubernetes.io/projected/e594a558-b805-4f1f-9cfa-a50d02390b4e-kube-api-access-h2m7x\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.382871 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.383559 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.383668 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e594a558-b805-4f1f-9cfa-a50d02390b4e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.388928 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.391880 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e594a558-b805-4f1f-9cfa-a50d02390b4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.407281 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2m7x\" (UniqueName: \"kubernetes.io/projected/e594a558-b805-4f1f-9cfa-a50d02390b4e-kube-api-access-h2m7x\") pod \"ovnkube-control-plane-749d76644c-hdfrc\" (UID: \"e594a558-b805-4f1f-9cfa-a50d02390b4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.414639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.414694 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.414712 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.414738 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.414756 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.419218 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.434382 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.448291 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.468402 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.517629 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.517695 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.517715 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.517742 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.517760 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.532941 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.537303 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:40:44.459133485 +0000 UTC Jan 20 09:19:17 crc kubenswrapper[4859]: W0120 09:19:17.547951 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode594a558_b805_4f1f_9cfa_a50d02390b4e.slice/crio-cd55604372eeca387b22a021b2d6f4162d16a254ff46dff06e2daf98800b59e7 WatchSource:0}: Error finding container cd55604372eeca387b22a021b2d6f4162d16a254ff46dff06e2daf98800b59e7: Status 404 returned error can't find the container with id cd55604372eeca387b22a021b2d6f4162d16a254ff46dff06e2daf98800b59e7 Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.573321 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.573416 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.573451 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:17 crc kubenswrapper[4859]: E0120 09:19:17.573507 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:17 crc kubenswrapper[4859]: E0120 09:19:17.573659 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:17 crc kubenswrapper[4859]: E0120 09:19:17.573823 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.581842 4859 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.620502 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.620717 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.620913 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.621103 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 
09:19:17.621261 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.725531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.725974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.726187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.726357 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.726539 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.828730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.828815 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.828831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.828851 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.828865 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.857399 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" event={"ID":"e594a558-b805-4f1f-9cfa-a50d02390b4e","Type":"ContainerStarted","Data":"cd55604372eeca387b22a021b2d6f4162d16a254ff46dff06e2daf98800b59e7"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.860295 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/0.log" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.863613 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.863745 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.881670 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.900436 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.915680 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932303 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932312 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932334 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:17Z","lastTransitionTime":"2026-01-20T09:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.932298 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.949422 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.971678 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:17 crc kubenswrapper[4859]: I0120 09:19:17.994354 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:17Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.008520 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.027443 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.034244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.034281 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.034291 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.034305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.034314 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.044944 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.064627 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f40
18e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.076606 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.098229 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector *v1.EgressService 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.113712 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.124283 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.136348 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.137056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.137129 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.137148 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.137173 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.137189 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.239560 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.239598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.239607 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.239623 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.239632 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.345540 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.345583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.345593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.345608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.345618 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.447713 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.447755 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.447769 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.447810 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.447827 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.537733 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:08:51.774058325 +0000 UTC Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.545974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.546006 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.546017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.546033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.546045 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.557418 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.560880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.560920 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.560928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.560945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.560974 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.573948 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.577544 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.577585 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.577598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.577613 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.577624 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.590116 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.593934 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.593972 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.593988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.594008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.594024 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.614484 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.619379 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.619430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.619445 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.619465 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.619481 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.632484 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.632823 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.634917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.634970 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.634989 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.635012 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.635031 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.721247 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tw45n"] Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.722222 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.722320 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.737568 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.737601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.737608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.737622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.737632 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.744962 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.759766 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.780065 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector *v1.EgressService 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.795459 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.798006 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptsm\" (UniqueName: \"kubernetes.io/projected/0c059dec-0bda-4110-9050-7cbba39eb183-kube-api-access-bptsm\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.798163 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.818100 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.840524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.840569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.840580 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.840602 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.840614 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.842703 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.862598 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.868959 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" event={"ID":"e594a558-b805-4f1f-9cfa-a50d02390b4e","Type":"ContainerStarted","Data":"0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.869099 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" event={"ID":"e594a558-b805-4f1f-9cfa-a50d02390b4e","Type":"ContainerStarted","Data":"59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.871307 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/1.log" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.871744 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/0.log" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.874892 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e" exitCode=1 Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.874930 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.875035 4859 scope.go:117] "RemoveContainer" 
containerID="4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.875720 4859 scope.go:117] "RemoveContainer" containerID="f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e" Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.875922 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.881404 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.899111 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptsm\" (UniqueName: \"kubernetes.io/projected/0c059dec-0bda-4110-9050-7cbba39eb183-kube-api-access-bptsm\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.899220 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.899730 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" 
not registered Jan 20 09:19:18 crc kubenswrapper[4859]: E0120 09:19:18.899826 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:19.399804831 +0000 UTC m=+34.155821017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.900260 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.915716 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T0
9:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.920313 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptsm\" (UniqueName: \"kubernetes.io/projected/0c059dec-0bda-4110-9050-7cbba39eb183-kube-api-access-bptsm\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.926672 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.940404 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc 
kubenswrapper[4859]: I0120 09:19:18.943650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.943676 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.943686 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.943701 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.943709 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:18Z","lastTransitionTime":"2026-01-20T09:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.953592 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3b
c09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.965807 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:18 crc kubenswrapper[4859]: I0120 09:19:18.982926 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:18Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.006812 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.019730 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.040217 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.045773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.045826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.045840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.045856 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.045868 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.052223 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.069340 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.082223 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.095660 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.110984 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.131238 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.149711 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.149776 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.149827 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.149851 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.149868 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.150669 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.166111 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc 
kubenswrapper[4859]: I0120 09:19:19.184385 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.198693 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.219031 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.237207 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.252483 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.252539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.252558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.252585 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.252603 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.257510 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.290145 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f40
18e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.305195 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.337737 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector *v1.EgressService 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"-9519d3c9c3e5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress/router-internal-default_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0120 09:19:18.521433 
62\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:19Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.355707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.355760 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.355775 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.355814 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.355830 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.403640 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:19 crc kubenswrapper[4859]: E0120 09:19:19.403837 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:19 crc kubenswrapper[4859]: E0120 09:19:19.403929 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:20.403903611 +0000 UTC m=+35.159919827 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.458208 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.458305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.458326 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.458351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.458369 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.538556 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:46:43.811788657 +0000 UTC Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.560920 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.560971 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.560982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.560999 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.561026 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.573500 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.573530 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.573578 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:19 crc kubenswrapper[4859]: E0120 09:19:19.573676 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:19 crc kubenswrapper[4859]: E0120 09:19:19.573767 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:19 crc kubenswrapper[4859]: E0120 09:19:19.573878 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.663491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.663553 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.663570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.663593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.663613 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.766671 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.766766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.766834 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.766866 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.766884 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.870337 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.870393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.870409 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.870432 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.870448 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.881030 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/1.log" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.973979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.974038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.974059 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.974083 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:19 crc kubenswrapper[4859]: I0120 09:19:19.974100 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:19Z","lastTransitionTime":"2026-01-20T09:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.083883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.083977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.084008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.084046 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.084066 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.187265 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.187349 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.187369 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.187395 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.187413 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.304998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.305372 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.305390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.305418 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.305437 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.409410 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.409498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.409525 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.409559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.409584 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.414360 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:20 crc kubenswrapper[4859]: E0120 09:19:20.414530 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:20 crc kubenswrapper[4859]: E0120 09:19:20.414603 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:22.414582178 +0000 UTC m=+37.170598354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.513604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.513674 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.513693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.513724 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.513744 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.539269 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:22:22.095326033 +0000 UTC Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.573762 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:20 crc kubenswrapper[4859]: E0120 09:19:20.574062 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.617587 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.617706 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.617726 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.617851 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.617878 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.721441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.721502 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.721515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.721544 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.721560 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.824946 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.824992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.825026 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.825044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.825057 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.928403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.928442 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.928456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.928475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:20 crc kubenswrapper[4859]: I0120 09:19:20.928487 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:20Z","lastTransitionTime":"2026-01-20T09:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.030950 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.031263 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.031389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.031508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.031624 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.135216 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.135297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.135321 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.135347 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.135365 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.238104 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.238171 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.238187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.238212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.238230 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.324625 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.324746 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.324848 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.324907 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325000 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325025 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:19:37.325000317 +0000 UTC m=+52.081016513 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325062 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:37.325049058 +0000 UTC m=+52.081065284 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325106 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325149 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325168 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325184 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325209 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:37.325176202 +0000 UTC m=+52.081192448 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.325247 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:37.325230153 +0000 UTC m=+52.081246459 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.340325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.340390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.340400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.340416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.340427 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.426746 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.427071 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.427143 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.427170 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.427269 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:19:37.427242195 +0000 UTC m=+52.183258411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.444076 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.444137 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.444156 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.444188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.444210 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.539594 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:03:16.010444889 +0000 UTC Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.547217 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.547271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.547288 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.547316 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.547336 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.573422 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.573434 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.573712 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.573444 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.573854 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:21 crc kubenswrapper[4859]: E0120 09:19:21.574084 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.650406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.650468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.650486 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.650512 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.650531 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.753948 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.754014 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.754031 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.754056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.754078 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.856481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.856548 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.856566 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.856592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.856615 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.958947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.959021 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.959038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.959063 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:21 crc kubenswrapper[4859]: I0120 09:19:21.959082 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:21Z","lastTransitionTime":"2026-01-20T09:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.062687 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.062752 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.062772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.062832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.062854 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.165737 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.165827 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.165840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.165858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.165870 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.268314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.268392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.268411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.268439 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.268459 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.371073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.371132 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.371150 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.371176 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.371195 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.438345 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:22 crc kubenswrapper[4859]: E0120 09:19:22.438487 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:22 crc kubenswrapper[4859]: E0120 09:19:22.438562 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:26.438544017 +0000 UTC m=+41.194560203 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.474135 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.474195 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.474212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.474237 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.474258 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.540156 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:27:40.196847941 +0000 UTC Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.572843 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:22 crc kubenswrapper[4859]: E0120 09:19:22.573036 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.577303 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.577371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.577398 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.577431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.577456 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.679854 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.679919 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.679936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.679962 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.679978 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.815095 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.815188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.815209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.815232 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.815249 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.918177 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.918231 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.918252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.918285 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:22 crc kubenswrapper[4859]: I0120 09:19:22.918310 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:22Z","lastTransitionTime":"2026-01-20T09:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.021326 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.021421 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.021437 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.021457 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.021471 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.125270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.125347 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.125363 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.125383 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.125396 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.228373 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.228411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.228423 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.228440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.228454 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.332122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.332188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.332206 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.332231 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.332250 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.435026 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.435105 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.435128 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.435160 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.435183 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.538330 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.538391 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.538410 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.538435 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.538454 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.540767 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:05:02.892747247 +0000 UTC Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.573380 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.573380 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:23 crc kubenswrapper[4859]: E0120 09:19:23.573579 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.573679 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:23 crc kubenswrapper[4859]: E0120 09:19:23.573776 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:23 crc kubenswrapper[4859]: E0120 09:19:23.573890 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.641393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.641433 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.641443 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.641460 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.641472 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.744347 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.744430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.744442 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.744468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.744479 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.847634 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.847713 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.847736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.847770 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.847864 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.950746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.950821 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.950837 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.950857 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:23 crc kubenswrapper[4859]: I0120 09:19:23.950875 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:23Z","lastTransitionTime":"2026-01-20T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.054591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.054673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.054697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.054729 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.054751 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.158105 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.158148 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.158159 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.158180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.158193 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.260567 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.260648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.260673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.260703 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.260725 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.362875 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.362948 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.362968 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.362995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.363013 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.465929 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.466007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.466031 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.466065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.466089 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.541962 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:22:05.198793273 +0000 UTC Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.568594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.568630 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.568640 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.568660 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.568671 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.573035 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:24 crc kubenswrapper[4859]: E0120 09:19:24.573235 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.671285 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.671346 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.671367 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.671393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.671413 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.774373 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.774551 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.774574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.774624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.774666 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.877929 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.877997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.878016 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.878041 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.878060 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.981181 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.981243 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.981260 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.981284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:24 crc kubenswrapper[4859]: I0120 09:19:24.981301 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:24Z","lastTransitionTime":"2026-01-20T09:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.085950 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.086265 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.086470 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.086638 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.086865 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.190745 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.190830 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.190845 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.190869 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.190880 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.294658 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.294718 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.294736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.294765 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.294838 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.397928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.397980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.397996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.398019 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.398038 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.500749 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.500837 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.500855 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.500879 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.500897 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.542126 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:33:55.084410406 +0000 UTC Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.572917 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.572948 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.573010 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:25 crc kubenswrapper[4859]: E0120 09:19:25.573125 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:25 crc kubenswrapper[4859]: E0120 09:19:25.573195 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:25 crc kubenswrapper[4859]: E0120 09:19:25.573298 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.596827 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-2
0T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.603462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.603529 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.603554 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.603584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.603608 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.610631 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.631201 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4984c2a73804eb2ab7cbf374d72eb98f464cc3a8dba4bacdf325ff32f042d298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"message\\\":\\\"20 09:19:15.308920 6143 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309109 6143 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309319 6143 reflector.go:311] Stopping reflector 
*v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:15.309554 6143 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 09:19:15.309606 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 09:19:15.309651 6143 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 09:19:15.309666 6143 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 09:19:15.309702 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:15.309741 6143 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 09:19:15.309758 6143 factory.go:656] Stopping watch factory\\\\nI0120 09:19:15.309760 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:15.309772 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 09:19:15.309801 6143 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"-9519d3c9c3e5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress/router-internal-default_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0120 09:19:18.521433 
62\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.656103 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.673758 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.697852 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.707083 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.707123 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.707135 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.707153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.707166 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.716531 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.736735 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.754233 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T0
9:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.773171 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rba
c-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.790219 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc 
kubenswrapper[4859]: I0120 09:19:25.808102 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.810057 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.810108 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.810134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.810162 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.810183 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.824617 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.839600 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.853347 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.864414 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.879333 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.912424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.912490 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.912503 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.912528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:25 crc kubenswrapper[4859]: I0120 09:19:25.912541 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:25Z","lastTransitionTime":"2026-01-20T09:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.016055 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.016096 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.016106 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.016122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.016134 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.119307 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.119371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.119389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.119414 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.119432 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.223131 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.223194 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.223210 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.223234 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.223251 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.326508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.326539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.326551 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.326567 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.326578 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.429292 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.429358 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.429375 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.429399 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.429419 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.483395 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:26 crc kubenswrapper[4859]: E0120 09:19:26.483607 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:26 crc kubenswrapper[4859]: E0120 09:19:26.483694 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:34.483670873 +0000 UTC m=+49.239687089 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.532355 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.532424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.532436 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.532456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.532492 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.543218 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:57:24.190578655 +0000 UTC Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.573151 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:26 crc kubenswrapper[4859]: E0120 09:19:26.573355 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.636534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.636584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.636603 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.636629 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.636648 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.739826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.739888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.739908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.739931 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.739953 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.843101 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.843486 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.843667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.843961 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.844178 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.946777 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.946881 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.946900 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.946928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:26 crc kubenswrapper[4859]: I0120 09:19:26.946948 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:26Z","lastTransitionTime":"2026-01-20T09:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.049462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.049528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.049550 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.049579 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.049601 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.153354 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.153424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.153444 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.153471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.153488 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.256416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.256474 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.256498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.256530 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.256553 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.359545 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.359605 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.359622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.359646 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.359664 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.462509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.462576 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.462593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.462617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.462636 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.544023 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:41:07.368615641 +0000 UTC Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.565594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.565652 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.565673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.565702 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.565724 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.573265 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.573353 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:27 crc kubenswrapper[4859]: E0120 09:19:27.573461 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.573283 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:27 crc kubenswrapper[4859]: E0120 09:19:27.573661 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:27 crc kubenswrapper[4859]: E0120 09:19:27.573914 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.668558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.668605 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.668616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.668635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.668648 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.771814 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.771880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.771901 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.771926 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.771944 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.874918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.874999 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.875020 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.875439 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.875491 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.978559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.978638 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.978656 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.978687 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:27 crc kubenswrapper[4859]: I0120 09:19:27.978705 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:27Z","lastTransitionTime":"2026-01-20T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.082008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.082072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.082091 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.082118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.082139 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.184893 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.184949 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.184970 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.184998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.185014 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.288307 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.288391 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.288416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.288454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.288473 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.390707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.390746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.390757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.390773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.390820 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.493927 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.493970 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.493983 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.494002 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.494015 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.544328 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:06:55.202560396 +0000 UTC Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.573051 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:28 crc kubenswrapper[4859]: E0120 09:19:28.573254 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.597549 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.597615 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.597632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.597664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.597689 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.701252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.701307 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.701323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.701344 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.701361 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.804553 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.804663 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.804683 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.804749 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.804769 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.908697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.908752 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.908773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.908820 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.908870 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.922654 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.922715 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.922736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.922763 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.922825 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: E0120 09:19:28.945959 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:28Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.951310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.951386 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.951410 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.951442 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.951466 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: E0120 09:19:28.974479 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:28Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.978656 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.978705 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.978716 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.978737 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.978753 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:28 crc kubenswrapper[4859]: E0120 09:19:28.994312 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:28Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.999471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.999537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.999556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.999584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:28 crc kubenswrapper[4859]: I0120 09:19:28.999604 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:28Z","lastTransitionTime":"2026-01-20T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.015313 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:29Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.019251 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.019302 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.019331 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.019359 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.019373 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.033280 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:29Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.033515 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.035191 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.035225 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.035238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.035254 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.035267 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.138328 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.138392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.138412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.138438 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.138454 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.241404 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.241471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.241494 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.241523 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.241544 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.344766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.344880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.344905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.344942 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.344965 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.447491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.447572 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.447596 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.447621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.447640 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.544903 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:46:09.48529565 +0000 UTC Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.550559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.550630 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.550652 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.550678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.550696 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.573023 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.573054 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.573263 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.573354 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.573486 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:29 crc kubenswrapper[4859]: E0120 09:19:29.573577 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.654120 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.654180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.654212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.654244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.654268 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.757357 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.757423 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.757446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.757476 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.757498 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.860980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.861036 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.861052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.861075 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.861092 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.963577 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.963646 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.963663 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.963689 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:29 crc kubenswrapper[4859]: I0120 09:19:29.963708 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:29Z","lastTransitionTime":"2026-01-20T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.067192 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.067256 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.067273 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.067297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.067318 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.170449 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.170517 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.170534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.170561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.170582 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.274459 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.274537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.274549 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.274567 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.274580 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.377237 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.377295 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.377306 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.377326 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.377339 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.479961 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.480030 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.480047 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.480074 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.480096 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.546061 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:22:46.360143071 +0000 UTC Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.573519 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:30 crc kubenswrapper[4859]: E0120 09:19:30.573716 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.574770 4859 scope.go:117] "RemoveContainer" containerID="f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.583570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.583827 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.583988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.584161 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.584300 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.598196 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.622302 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.650064 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.669772 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.686690 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.686752 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.686766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.686814 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.686832 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.693885 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.712815 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.744261 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"-9519d3c9c3e5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress/router-internal-default_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0120 09:19:18.521433 62\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.768161 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.782708 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.789917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.789977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.789995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 
09:19:30.790021 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.790041 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.799930 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.815900 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7
d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66
7991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.837094 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.854742 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.872088 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.892756 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.892871 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.892947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.892987 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.893011 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.893108 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.909139 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.926512 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc 
kubenswrapper[4859]: I0120 09:19:30.937880 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/1.log" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.941761 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.941953 4859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.964442 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee880
51c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.982737 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.997140 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.997179 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.997188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.997208 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.997228 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:30Z","lastTransitionTime":"2026-01-20T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:30 crc kubenswrapper[4859]: I0120 09:19:30.999404 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.021191 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.033631 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.059573 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.073331 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.096591 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"-9519d3c9c3e5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress/router-internal-default_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0120 09:19:18.521433 
62\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.101541 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.101616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.101637 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.101670 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.101692 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.113227 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.131194 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.148563 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.167768 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.182127 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49
fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.194754 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc 
kubenswrapper[4859]: I0120 09:19:31.205064 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.205111 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.205127 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.205149 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.205167 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.212206 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.226998 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.243989 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.318319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.318385 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.318400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.318423 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.318441 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.420405 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.420447 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.420458 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.420491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.420504 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.524436 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.524907 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.524933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.524964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.524986 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.546406 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:47:18.965463134 +0000 UTC Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.573413 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.573482 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:31 crc kubenswrapper[4859]: E0120 09:19:31.573539 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.573413 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:31 crc kubenswrapper[4859]: E0120 09:19:31.573678 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:31 crc kubenswrapper[4859]: E0120 09:19:31.573850 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.627493 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.627539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.627553 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.627573 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.627592 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.730538 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.730608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.730691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.730778 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.730841 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.833618 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.833682 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.833700 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.833726 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.833743 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.937005 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.937056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.937072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.937097 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.937116 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:31Z","lastTransitionTime":"2026-01-20T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.947050 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/2.log" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.948103 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/1.log" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.951131 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982" exitCode=1 Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.951193 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982"} Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.951255 4859 scope.go:117] "RemoveContainer" containerID="f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.952520 4859 scope.go:117] "RemoveContainer" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982" Jan 20 09:19:31 crc kubenswrapper[4859]: E0120 09:19:31.952843 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.975466 4859 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f564
0d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:31 crc kubenswrapper[4859]: I0120 09:19:31.991200 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.010799 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.029761 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.040247 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.040296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.040313 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc 
kubenswrapper[4859]: I0120 09:19:32.040338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.040381 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.050566 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.083009 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.099655 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.132139 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2667b7b735eb800fda45d776c207a47d7ce11d107f9569d6ec02c74f541b90e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"-9519d3c9c3e5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress/router-internal-default_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.176\\\\\\\", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0120 09:19:18.521433 62\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event 
handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.143505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.143572 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.143599 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.143631 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.143655 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.150001 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.167078 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.189158 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.209190 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.224699 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.243288 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.246431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc 
kubenswrapper[4859]: I0120 09:19:32.246507 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.246524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.246549 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.246566 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.260903 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.278071 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc 
kubenswrapper[4859]: I0120 09:19:32.298440 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:32Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.349688 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.349772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.349826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.349860 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.349882 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.452057 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.452113 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.452129 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.452153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.452169 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.546917 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:24:25.894020976 +0000 UTC Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.555122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.555199 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.555223 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.555251 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.555272 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.573599 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:32 crc kubenswrapper[4859]: E0120 09:19:32.573805 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.657804 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.657855 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.657872 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.657892 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.657901 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.760098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.760153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.760169 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.760192 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.760211 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.863238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.863305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.863323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.863349 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.863366 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.959044 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/2.log" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.965643 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.965775 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.965842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.965875 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:32 crc kubenswrapper[4859]: I0120 09:19:32.965897 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:32Z","lastTransitionTime":"2026-01-20T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.069264 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.069323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.069344 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.069372 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.069395 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.172731 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.172830 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.172855 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.172882 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.172901 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.276338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.276393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.276410 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.276435 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.276453 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.379505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.379578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.379597 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.379626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.379644 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.482300 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.482372 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.482388 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.482414 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.482432 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.547870 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:12:23.396313945 +0000 UTC Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.573004 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.573102 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:19:33 crc kubenswrapper[4859]: E0120 09:19:33.573198 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.573257 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:19:33 crc kubenswrapper[4859]: E0120 09:19:33.573461 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 09:19:33 crc kubenswrapper[4859]: E0120 09:19:33.573690 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.585773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.585869 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.585888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.585914 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.585936 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.688621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.688691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.688710 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.688744 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.688769 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.791988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.792054 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.792071 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.792109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.792130 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.894728 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.894816 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.894839 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.894899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.894922 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.912598 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.913495 4859 scope.go:117] "RemoveContainer" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982"
Jan 20 09:19:33 crc kubenswrapper[4859]: E0120 09:19:33.913690 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f"
Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.936494 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:33Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.963479 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:33Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.984239 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:33Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.998651 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.998726 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.998745 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.998771 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:33 crc kubenswrapper[4859]: I0120 09:19:33.998832 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:33Z","lastTransitionTime":"2026-01-20T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.008246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.030272 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.063999 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09
:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.082853 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.101669 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.101738 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.101821 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.101850 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.101868 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.115457 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.138777 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.159376 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.179528 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.199906 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.205338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc 
kubenswrapper[4859]: I0120 09:19:34.205409 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.205436 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.205470 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.205493 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.219022 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.235078 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc 
kubenswrapper[4859]: I0120 09:19:34.254482 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.273289 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.293420 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:34Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.308400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.308473 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.308485 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.308505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.308519 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.411691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.411775 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.411861 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.411892 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.411916 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.515004 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.515080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.515102 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.515134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.515159 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.548021 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:40:53.87457815 +0000 UTC Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.573438 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:34 crc kubenswrapper[4859]: E0120 09:19:34.573596 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.578171 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:34 crc kubenswrapper[4859]: E0120 09:19:34.578406 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:34 crc kubenswrapper[4859]: E0120 09:19:34.578550 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:19:50.578515801 +0000 UTC m=+65.334532017 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.618933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.618999 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.619017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.619043 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.619067 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.722639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.722750 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.722773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.722850 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.723066 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.826284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.826365 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.826380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.826405 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.826420 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.930111 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.930192 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.930212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.930237 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:34 crc kubenswrapper[4859]: I0120 09:19:34.930255 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:34Z","lastTransitionTime":"2026-01-20T09:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.033144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.033209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.033257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.033283 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.033301 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.137350 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.137434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.137458 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.137489 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.137513 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.241314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.241378 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.241392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.241413 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.241430 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.344187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.344240 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.344257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.344283 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.344302 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.446866 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.446933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.446952 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.446978 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.446998 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.548240 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:32:25.964627435 +0000 UTC Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.550449 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.550513 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.550532 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.550561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.550580 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.573137 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.573211 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.573157 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:35 crc kubenswrapper[4859]: E0120 09:19:35.573414 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:35 crc kubenswrapper[4859]: E0120 09:19:35.573542 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:35 crc kubenswrapper[4859]: E0120 09:19:35.573659 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.590597 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc 
kubenswrapper[4859]: I0120 09:19:35.605444 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.620325 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.634773 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.652816 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.652887 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.652904 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.652930 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.652949 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.653685 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.671842 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.691226 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.707030 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.725430 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.747744 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.756252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.756312 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.756330 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.756355 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.756372 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.760469 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.790551 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f40
18e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.805345 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.832164 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.854979 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.859079 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.859129 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.859144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.859165 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.859178 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.874525 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.892294 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:35Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.962762 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.962876 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.962901 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.962934 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:35 crc kubenswrapper[4859]: I0120 09:19:35.962960 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:35Z","lastTransitionTime":"2026-01-20T09:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.066336 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.066398 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.066410 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.066429 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.066442 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.169362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.169415 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.169428 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.169447 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.169461 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.272964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.273027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.273045 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.273070 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.273088 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.376043 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.376102 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.376118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.376141 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.376156 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.479519 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.479600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.479620 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.479647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.479670 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.548888 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:47:48.815366122 +0000 UTC Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.573356 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:36 crc kubenswrapper[4859]: E0120 09:19:36.573563 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.582518 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.582578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.582599 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.582673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.582705 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.686512 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.686590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.686616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.686650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.686674 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.790498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.790564 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.790582 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.790607 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.790626 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.893383 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.893447 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.893472 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.893498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.893517 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.997227 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.997300 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.997316 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.997335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:36 crc kubenswrapper[4859]: I0120 09:19:36.997347 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:36Z","lastTransitionTime":"2026-01-20T09:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.100301 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.100334 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.100345 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.100360 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.100371 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.137350 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.149587 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.158079 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.174862 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.187570 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202480 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202687 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202710 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.202725 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.213644 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.235678 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f40
18e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.247077 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.265834 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.284700 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.302578 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.305393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.305495 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.305521 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 
09:19:37.305552 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.305578 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.319815 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.335457 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.351237 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.367011 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc 
kubenswrapper[4859]: I0120 09:19:37.386485 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.403397 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.407648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.407679 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.407687 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.407701 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.407711 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.408485 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.408633 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:20:09.408614792 +0000 UTC m=+84.164630968 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.408945 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.409154 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.409365 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.409211 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.409640 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.409758 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.410094 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:20:09.41006431 +0000 UTC m=+84.166080496 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.409311 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.410373 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:20:09.410357547 +0000 UTC m=+84.166373733 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.409483 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.410640 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:20:09.410624575 +0000 UTC m=+84.166640801 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.418396 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T09:19:37Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.509947 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.510261 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.510335 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.510361 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.510477 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:20:09.510444839 +0000 UTC m=+84.266461045 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.511643 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.511692 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.511708 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.511733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.511751 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.549938 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:20:10.025788399 +0000 UTC Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.573432 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.573511 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.573447 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.573649 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.573824 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:37 crc kubenswrapper[4859]: E0120 09:19:37.574061 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.614882 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.614936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.614954 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.614979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.614997 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.717567 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.717632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.717657 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.717688 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.717711 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.820487 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.820546 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.820557 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.820578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.820592 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.924134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.924200 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.924219 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.924244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:37 crc kubenswrapper[4859]: I0120 09:19:37.924265 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:37Z","lastTransitionTime":"2026-01-20T09:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.027613 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.027694 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.027720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.027751 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.027774 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.131319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.131384 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.131401 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.131427 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.131444 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.234708 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.234770 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.234819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.234844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.234862 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.338583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.338675 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.338693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.338719 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.338739 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.442421 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.442508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.442528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.442557 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.442576 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.546259 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.546327 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.546351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.546384 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.546407 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.550957 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:27:43.597688071 +0000 UTC
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.572870 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n"
Jan 20 09:19:38 crc kubenswrapper[4859]: E0120 09:19:38.573083 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.649660 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.649715 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.649731 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.649755 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.649775 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.752959 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.753026 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.753049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.753082 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.753100 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.856709 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.856822 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.856844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.856870 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.856888 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.959838 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.959913 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.959940 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.959973 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:38 crc kubenswrapper[4859]: I0120 09:19:38.959995 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:38Z","lastTransitionTime":"2026-01-20T09:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.063079 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.063151 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.063168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.063195 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.063213 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.167113 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.167212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.167602 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.167697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.167961 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.185221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.185278 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.185296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.185320 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.185339 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.201004 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:39Z is after 2025-08-24T17:21:41Z"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.206240 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.206334 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.206360 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.206391 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.206418 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.222604 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:39Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.226765 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.226826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.226835 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.226847 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.226857 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.241947 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:39Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.246589 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.246624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.246634 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.246650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.246661 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.266150 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:39Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.270632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.270731 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.270751 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.270777 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.270837 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.290757 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:39Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.291039 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.292885 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.292928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.292945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.292966 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.292983 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.396451 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.396487 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.396498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.396516 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.396527 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.499481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.499545 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.499562 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.499591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.499613 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.551269 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:08:04.014115448 +0000 UTC Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.572948 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.572996 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.573080 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.573198 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.573336 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:39 crc kubenswrapper[4859]: E0120 09:19:39.573401 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.602906 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.602945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.602960 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.602979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.602995 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.705994 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.706052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.706069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.706098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.706117 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.808536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.808593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.808610 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.808635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.808652 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.911475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.911532 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.911550 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.911578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:39 crc kubenswrapper[4859]: I0120 09:19:39.911597 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:39Z","lastTransitionTime":"2026-01-20T09:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.013689 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.013749 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.013766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.013831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.013859 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.117385 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.117434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.117445 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.117465 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.117478 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.220090 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.220125 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.220133 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.220146 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.220157 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.323433 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.323515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.323551 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.323581 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.323605 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.426594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.426679 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.426703 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.426730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.426748 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.530329 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.530399 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.530415 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.530446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.530464 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.551949 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:04:02.924007445 +0000 UTC Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.573392 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:40 crc kubenswrapper[4859]: E0120 09:19:40.573603 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.633667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.633745 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.633763 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.633859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.633882 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.736714 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.736815 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.736833 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.736861 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.736879 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.840064 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.840138 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.840155 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.840180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.840199 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.942893 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.942956 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.942973 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.942998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:40 crc kubenswrapper[4859]: I0120 09:19:40.943016 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:40Z","lastTransitionTime":"2026-01-20T09:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.045869 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.045951 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.045976 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.046007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.046030 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.149565 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.149635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.149653 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.149680 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.149699 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.252823 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.252930 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.252951 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.252979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.252999 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.357088 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.357146 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.357164 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.357189 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.357210 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.460055 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.460571 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.460600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.460633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.460655 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.552090 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:54:04.14722735 +0000 UTC
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.563773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.563866 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.563882 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.563907 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.563924 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.573339 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.573354 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:19:41 crc kubenswrapper[4859]: E0120 09:19:41.573529 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 09:19:41 crc kubenswrapper[4859]: E0120 09:19:41.573880 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.573662 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:19:41 crc kubenswrapper[4859]: E0120 09:19:41.574308 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.667556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.667630 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.667655 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.667685 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.667708 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.770448 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.770501 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.770516 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.770537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.770551 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.872610 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.872659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.872676 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.872698 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.872716 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.975187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.975234 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.975251 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.975274 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:41 crc kubenswrapper[4859]: I0120 09:19:41.975291 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:41Z","lastTransitionTime":"2026-01-20T09:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.078359 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.078403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.078415 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.078431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.078445 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.181419 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.181505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.181549 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.181585 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.181609 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.285649 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.285720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.285733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.285762 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.285776 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.389402 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.389469 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.389486 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.389508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.389520 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.492355 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.492432 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.492449 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.492474 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.492492 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.552737 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:27:45.817318432 +0000 UTC
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.572609 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n"
Jan 20 09:19:42 crc kubenswrapper[4859]: E0120 09:19:42.572825 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.595843 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.595905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.595922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.595946 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.595966 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.698916 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.698982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.698998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.699022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.699040 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.802555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.802622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.802647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.802678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.802704 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.905708 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.905753 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.905769 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.905812 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:42 crc kubenswrapper[4859]: I0120 09:19:42.905828 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:42Z","lastTransitionTime":"2026-01-20T09:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.008602 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.008665 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.008687 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.008713 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.008731 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.111983 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.112037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.112055 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.112078 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.112096 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.215583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.215659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.215678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.215706 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.215725 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.318840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.318908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.318925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.318950 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.318969 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.422369 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.422435 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.422452 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.422476 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.422495 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.527234 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.527340 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.527424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.527517 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.527542 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.553409 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:06:40.040932189 +0000 UTC
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.572811 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.572871 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.572965 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:19:43 crc kubenswrapper[4859]: E0120 09:19:43.572967 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 09:19:43 crc kubenswrapper[4859]: E0120 09:19:43.573106 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 09:19:43 crc kubenswrapper[4859]: E0120 09:19:43.573256 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.630396 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.630431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.630440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.630454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.630464 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.733933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.734171 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.734189 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.734211 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.734228 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.837369 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.837424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.837440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.837467 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.837483 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.940511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.940581 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.940604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.940637 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:43 crc kubenswrapper[4859]: I0120 09:19:43.940661 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:43Z","lastTransitionTime":"2026-01-20T09:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.044310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.044424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.044439 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.044458 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.044472 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.147725 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.147806 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.147825 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.147842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.147854 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.250890 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.250935 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.250944 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.250962 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.250972 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.355066 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.355168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.355189 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.355213 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.355230 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.458287 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.458340 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.458351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.458370 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.458382 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.554581 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:32:57.935476476 +0000 UTC Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.561633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.561692 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.561716 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.561747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.561769 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.573465 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:44 crc kubenswrapper[4859]: E0120 09:19:44.573641 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.664452 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.664511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.664527 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.664551 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.664569 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.767076 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.767126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.767144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.767167 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.767187 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.888193 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.888243 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.888255 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.888282 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.888308 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.990809 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.990865 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.990881 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.990904 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:44 crc kubenswrapper[4859]: I0120 09:19:44.990918 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:44Z","lastTransitionTime":"2026-01-20T09:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.094534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.094598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.094616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.094641 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.094661 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.197563 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.197629 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.197647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.197673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.197694 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.300215 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.300287 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.300305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.300330 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.300349 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.403662 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.403739 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.403757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.403822 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.403847 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.506995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.507054 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.507071 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.507095 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.507113 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.555558 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:56:25.593856414 +0000 UTC Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.572779 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:45 crc kubenswrapper[4859]: E0120 09:19:45.573211 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.573287 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.573293 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:45 crc kubenswrapper[4859]: E0120 09:19:45.573405 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:45 crc kubenswrapper[4859]: E0120 09:19:45.573509 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.598667 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.611347 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.611420 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.611443 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.611476 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.611498 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.623376 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.643396 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.666685 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.689387 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.708509 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.714894 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.714955 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.714982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.715015 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.715039 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.730217 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z 
is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.747895 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.767730 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc 
kubenswrapper[4859]: I0120 09:19:45.784851 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.806203 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.818512 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.818602 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.818624 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.818651 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.818670 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.827327 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28
657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.844378 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.862938 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.881414 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.913080 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) 
from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.921859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.921895 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.921909 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.921936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.921949 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:45Z","lastTransitionTime":"2026-01-20T09:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.950253 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:45 crc kubenswrapper[4859]: I0120 09:19:45.967823 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:45Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.025142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.025198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.025214 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.025238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.025255 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.129311 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.129385 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.129404 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.129431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.129450 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.232720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.232775 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.232819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.232844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.232861 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.335258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.335320 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.335338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.335363 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.335381 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.439258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.439332 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.439351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.439380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.439401 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.543512 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.543567 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.543608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.543626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.543638 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.556423 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:09:38.881328599 +0000 UTC Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.573006 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:46 crc kubenswrapper[4859]: E0120 09:19:46.573210 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.646672 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.646730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.646747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.646772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.646820 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.749854 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.749908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.749918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.749937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.749953 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.853640 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.853714 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.853725 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.853743 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.853756 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.956430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.956475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.956486 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.956527 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:46 crc kubenswrapper[4859]: I0120 09:19:46.956540 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:46Z","lastTransitionTime":"2026-01-20T09:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.059203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.059257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.059276 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.059296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.059313 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.161975 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.162043 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.162060 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.162087 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.162110 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.265109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.265168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.265185 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.265209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.265228 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.369314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.369390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.369411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.369442 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.369470 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.472765 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.472881 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.472905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.472934 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.472952 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.557465 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:55:41.622667759 +0000 UTC Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.573096 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:47 crc kubenswrapper[4859]: E0120 09:19:47.573291 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.573357 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.573148 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:47 crc kubenswrapper[4859]: E0120 09:19:47.573516 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:47 crc kubenswrapper[4859]: E0120 09:19:47.573924 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.576131 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.576180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.576196 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.576220 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.576252 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.678471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.678559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.678587 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.678624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.678655 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.781096 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.781159 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.781177 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.781202 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.781220 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.884456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.884515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.884531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.884557 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.884574 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.987948 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.988027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.988039 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.988080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:47 crc kubenswrapper[4859]: I0120 09:19:47.988095 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:47Z","lastTransitionTime":"2026-01-20T09:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.090911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.091021 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.091043 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.091068 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.091087 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.194625 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.194728 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.194746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.194831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.194850 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.298055 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.298098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.298133 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.298154 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.298166 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.400990 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.401052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.401072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.401095 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.401179 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.504751 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.504840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.504860 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.504886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.504904 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.557579 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:33:30.389523557 +0000 UTC Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.573208 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:48 crc kubenswrapper[4859]: E0120 09:19:48.573409 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.607574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.607621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.607630 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.607645 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.607656 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.710986 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.711037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.711062 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.711092 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.711116 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.814302 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.814362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.814392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.814418 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.814436 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.917328 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.917382 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.917403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.917431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:48 crc kubenswrapper[4859]: I0120 09:19:48.917451 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:48Z","lastTransitionTime":"2026-01-20T09:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.020695 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.020757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.020824 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.020861 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.020886 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.124511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.124572 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.124590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.124618 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.124642 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.227967 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.228412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.228686 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.228886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.229129 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.332019 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.332069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.332086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.332110 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.332128 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.434548 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.434774 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.434981 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.435134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.435289 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.505429 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.505561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.505649 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.505743 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.505860 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.522673 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:49Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.526592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.526629 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.526639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.526652 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.526661 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.540156 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:49Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.544842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.544944 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.545007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.545041 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.545108 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.558433 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:01:49.816819629 +0000 UTC Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.564288 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",
\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:49Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.568117 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.568167 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.568211 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.568229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.568238 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.573521 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.573608 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.573525 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.573655 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.573862 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.574100 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.575164 4859 scope.go:117] "RemoveContainer" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.575900 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.581835 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:49Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.585823 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.585999 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.586088 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.586178 4859 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.586263 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.597699 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:49Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:49 crc kubenswrapper[4859]: E0120 09:19:49.598095 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.600204 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.600243 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.600255 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.600274 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc 
kubenswrapper[4859]: I0120 09:19:49.600288 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.702614 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.702643 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.702650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.702664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.702673 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.810233 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.810305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.810331 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.810380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.810405 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.914417 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.914461 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.914474 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.914491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:49 crc kubenswrapper[4859]: I0120 09:19:49.914504 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:49Z","lastTransitionTime":"2026-01-20T09:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.016721 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.016759 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.016770 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.016806 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.016820 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.119568 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.119617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.119631 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.119653 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.119671 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.222956 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.223018 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.223027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.223044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.223053 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.326169 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.326229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.326246 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.326272 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.326290 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.429938 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.429988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.430000 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.430017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.430034 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.533374 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.533441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.533461 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.533484 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.533501 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.558766 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:35:46.441184898 +0000 UTC Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.572959 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:50 crc kubenswrapper[4859]: E0120 09:19:50.573090 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.635469 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.635509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.635519 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.635533 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.635544 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.664079 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:50 crc kubenswrapper[4859]: E0120 09:19:50.664309 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:50 crc kubenswrapper[4859]: E0120 09:19:50.664456 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:20:22.664419879 +0000 UTC m=+97.420436095 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.738852 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.738923 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.738936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.738974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.738986 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.841497 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.841569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.841595 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.841622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.841641 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.946275 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.946325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.946341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.946363 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:50 crc kubenswrapper[4859]: I0120 09:19:50.946380 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:50Z","lastTransitionTime":"2026-01-20T09:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.056347 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.056398 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.056412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.056488 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.056502 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.158854 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.158900 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.158915 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.158939 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.158953 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.261920 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.261997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.262017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.262041 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.262058 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.364697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.364856 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.364878 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.364903 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.364920 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.466664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.466733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.466750 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.466776 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.466830 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.559499 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:53:11.394797126 +0000 UTC Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.569476 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.569525 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.569537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.569556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.569567 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.572880 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.572910 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.572908 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:51 crc kubenswrapper[4859]: E0120 09:19:51.573021 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:51 crc kubenswrapper[4859]: E0120 09:19:51.573115 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:51 crc kubenswrapper[4859]: E0120 09:19:51.573252 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.672315 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.672356 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.672367 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.672383 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.672393 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.775153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.775212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.775233 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.775260 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.775278 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.878084 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.878132 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.878143 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.878162 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.878175 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.981745 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.981891 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.981917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.981949 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:51 crc kubenswrapper[4859]: I0120 09:19:51.981973 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:51Z","lastTransitionTime":"2026-01-20T09:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.083575 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.083608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.083618 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.083631 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.083643 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.188562 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.188625 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.188644 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.188668 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.188685 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.292089 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.292158 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.292177 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.292203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.292222 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.394925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.394984 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.395000 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.395026 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.395044 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.498128 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.498245 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.498269 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.498318 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.498344 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.559959 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:21:38.676160056 +0000 UTC Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.573328 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:52 crc kubenswrapper[4859]: E0120 09:19:52.573559 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.600947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.601001 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.601018 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.601084 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.601104 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.704338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.704405 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.704422 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.704451 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.704468 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.807469 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.807554 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.807571 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.807596 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.807615 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.910023 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.910091 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.910114 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.910144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:52 crc kubenswrapper[4859]: I0120 09:19:52.910163 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:52Z","lastTransitionTime":"2026-01-20T09:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.012324 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.012377 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.012389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.012406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.012418 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.037901 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/0.log" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.037958 4859 generic.go:334] "Generic (PLEG): container finished" podID="81947dc9-599a-4d35-a9c5-2684294a3afb" containerID="c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4" exitCode=1 Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.037992 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerDied","Data":"c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.038419 4859 scope.go:117] "RemoveContainer" containerID="c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.059189 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.077013 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.098922 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.110377 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.115130 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.115171 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.115205 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 
09:19:53.115225 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.115238 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.125437 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.137818 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7
d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66
7991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.149439 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.163109 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.173667 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.183086 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.195447 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.205457 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.214532 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc 
kubenswrapper[4859]: I0120 09:19:53.217406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.217440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.217451 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.217469 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.217483 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.227119 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.240096 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.255163 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.264949 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.276354 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:53Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.320412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.320528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.320587 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.320657 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.320721 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.423296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.423336 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.423354 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.423376 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.423394 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.526660 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.526715 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.526727 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.526746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.526759 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.560316 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:26:17.081785061 +0000 UTC Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.572730 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.572866 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:53 crc kubenswrapper[4859]: E0120 09:19:53.572911 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.572958 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:53 crc kubenswrapper[4859]: E0120 09:19:53.573066 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:53 crc kubenswrapper[4859]: E0120 09:19:53.573205 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.629844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.630156 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.630249 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.630371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.630470 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.733306 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.733545 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.733609 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.733678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.733755 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.837054 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.837119 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.837136 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.837161 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.837179 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.939038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.939069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.939080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.939094 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:53 crc kubenswrapper[4859]: I0120 09:19:53.939103 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:53Z","lastTransitionTime":"2026-01-20T09:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.040670 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.040708 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.040717 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.040733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.040744 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.043023 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/0.log" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.043110 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerStarted","Data":"b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.062713 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.079152 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20
T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.088946 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.100414 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc 
kubenswrapper[4859]: I0120 09:19:54.110586 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.129006 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.142976 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.143011 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.143022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.143042 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.143055 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.144814 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.155735 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.172432 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.185822 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.200266 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.221324 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.240503 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f74
10f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.245129 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.245179 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.245196 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.245221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.245240 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.252564 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.275894 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.294929 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.315544 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.335478 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:54Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.347862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.348102 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.348297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.348471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.348807 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.451253 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.451308 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.451321 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.451341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.451354 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.554295 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.554614 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.554758 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.554941 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.555069 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.560629 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:06:40.691134978 +0000 UTC Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.572997 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:54 crc kubenswrapper[4859]: E0120 09:19:54.573130 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.657381 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.657657 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.657834 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.658007 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.658203 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.760964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.761018 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.761036 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.761061 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.761079 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.863439 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.863495 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.863508 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.863529 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.863545 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.966065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.966127 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.966147 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.966171 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:54 crc kubenswrapper[4859]: I0120 09:19:54.966190 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:54Z","lastTransitionTime":"2026-01-20T09:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.069261 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.069319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.069337 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.069361 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.069380 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.172453 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.172521 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.172547 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.172578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.172598 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.275411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.275454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.275468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.275489 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.275505 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.377886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.377948 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.377965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.377996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.378019 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.486849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.486911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.486928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.486951 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.486969 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.561431 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:03:52.291346434 +0000 UTC Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.572925 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.572925 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.573147 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:55 crc kubenswrapper[4859]: E0120 09:19:55.573224 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:55 crc kubenswrapper[4859]: E0120 09:19:55.573099 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:55 crc kubenswrapper[4859]: E0120 09:19:55.573449 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.585616 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.589428 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.589464 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.589476 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.589492 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.589505 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.600176 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.615063 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.626737 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.638893 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.652580 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.666429 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.681006 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.692594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.692641 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.692658 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.692684 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.692701 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.696993 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.709113 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc 
kubenswrapper[4859]: I0120 09:19:55.727047 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.740771 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.756046 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.765471 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.777824 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.788340 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.795188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.795248 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.795265 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.795293 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.795315 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.810148 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.839645 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:55Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.897664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.897706 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.897720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.897740 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:55 crc kubenswrapper[4859]: I0120 09:19:55.897754 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:55Z","lastTransitionTime":"2026-01-20T09:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.000678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.000723 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.000734 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.000750 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.000763 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.103070 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.103349 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.103417 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.103478 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.103544 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.210966 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.211052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.211080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.211110 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.211143 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.313925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.314358 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.314772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.315241 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.315858 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.418767 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.419029 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.419213 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.419390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.419527 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.522133 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.522184 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.522192 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.522205 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.522213 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.562596 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:37:23.270476037 +0000 UTC Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.572700 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:56 crc kubenswrapper[4859]: E0120 09:19:56.572888 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.624024 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.624107 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.624166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.624233 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.624299 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.726867 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.726922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.726936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.726954 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.726966 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.829169 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.829203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.829214 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.829231 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.829242 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.932112 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.932175 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.932198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.932228 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:56 crc kubenswrapper[4859]: I0120 09:19:56.932248 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:56Z","lastTransitionTime":"2026-01-20T09:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.034545 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.034582 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.034590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.034601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.034608 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.137532 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.137611 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.137626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.137650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.137666 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.240641 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.240697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.240706 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.240721 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.240732 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.344115 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.344455 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.344579 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.344699 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.344850 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.447247 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.447293 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.447305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.447323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.447336 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.550378 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.550446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.550463 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.550487 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.550507 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.563657 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:56:46.232963351 +0000 UTC Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.573092 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.573148 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:57 crc kubenswrapper[4859]: E0120 09:19:57.573289 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.573317 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:57 crc kubenswrapper[4859]: E0120 09:19:57.573482 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:57 crc kubenswrapper[4859]: E0120 09:19:57.573624 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.652798 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.652842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.652858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.652875 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.652887 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.755569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.755958 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.756099 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.756271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.756404 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.859741 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.859832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.859850 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.860054 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.860073 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.962441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.962505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.962516 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.962532 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:57 crc kubenswrapper[4859]: I0120 09:19:57.962562 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:57Z","lastTransitionTime":"2026-01-20T09:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.064389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.064453 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.064478 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.064513 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.064538 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.167236 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.167301 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.167322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.167353 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.167377 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.271499 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.271555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.271577 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.271602 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.271618 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.374138 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.374178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.374193 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.374209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.374220 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.476534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.476584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.476601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.476624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.476643 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.563809 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:44:09.839909508 +0000 UTC Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.573415 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:19:58 crc kubenswrapper[4859]: E0120 09:19:58.573606 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.579407 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.579461 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.579479 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.579500 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.579516 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.682357 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.682411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.682430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.682456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.682479 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.789704 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.789737 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.789746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.789760 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.789770 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.894892 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.894941 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.894958 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.894981 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.894998 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.997475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.997525 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.997541 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.997563 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:58 crc kubenswrapper[4859]: I0120 09:19:58.997578 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:58Z","lastTransitionTime":"2026-01-20T09:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.100037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.100100 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.100117 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.100140 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.100157 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.204147 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.204219 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.204228 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.204242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.204251 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.307104 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.307146 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.307161 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.307180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.307196 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.410222 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.410270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.410286 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.410308 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.410324 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.513670 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.513720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.513736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.513758 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.513774 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.564419 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:44:47.551705358 +0000 UTC Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.576917 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.577013 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.577202 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.577196 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.577383 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.577521 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.616199 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.616270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.616294 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.616323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.616344 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.719905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.719950 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.719967 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.719991 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.720008 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.822836 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.822890 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.822908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.822931 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.822948 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.891831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.891897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.891917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.891943 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.891964 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.915471 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:59Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.923880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.923925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.923942 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.923967 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.923984 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.950890 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:59Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.956193 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.956377 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.956397 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.956414 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.956427 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:19:59 crc kubenswrapper[4859]: E0120 09:19:59.980933 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:19:59Z is after 2025-08-24T17:21:41Z" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.985896 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.985968 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.985993 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.986024 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:19:59 crc kubenswrapper[4859]: I0120 09:19:59.986047 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:19:59Z","lastTransitionTime":"2026-01-20T09:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: E0120 09:20:00.010962 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:00Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.016524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.016618 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.016635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.016659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.016677 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: E0120 09:20:00.036481 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:00Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:00 crc kubenswrapper[4859]: E0120 09:20:00.037017 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.039988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.040176 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.040313 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.040427 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.040539 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.143349 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.143833 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.144000 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.144151 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.144292 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.247556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.247622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.247639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.247664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.247684 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.350656 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.351011 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.351145 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.351272 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.351391 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.455374 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.455464 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.455483 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.455506 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.455523 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.559220 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.559303 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.559325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.559382 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.559402 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.564739 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:56:40.824783029 +0000 UTC Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.573271 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:00 crc kubenswrapper[4859]: E0120 09:20:00.573441 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.575641 4859 scope.go:117] "RemoveContainer" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.661997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.662096 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.662113 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.662137 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.662155 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.765045 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.765117 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.765133 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.765155 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.765172 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.867965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.868032 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.868051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.868078 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.868104 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.970132 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.970166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.970178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.970194 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:00 crc kubenswrapper[4859]: I0120 09:20:00.970207 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:00Z","lastTransitionTime":"2026-01-20T09:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.065210 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/2.log" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.068962 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.069652 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.073430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.073470 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.073481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.073497 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.073509 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.084596 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b
33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.096708 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.109874 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.125243 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.139716 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.156341 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.176274 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.176332 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.176348 4859 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.176369 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.176396 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.218940 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 
20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.244538 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.261090 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.276036 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.278957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.279008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.279024 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.279045 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.279059 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.286164 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.298007 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.308467 4859 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.329850 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.360679 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.374246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.381208 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.381235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.381242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 
09:20:01.381255 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.381263 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.386451 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.401249 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7
d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66
7991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:01Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.483434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.483490 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.483500 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.483513 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.483521 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.565589 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:50:38.548851756 +0000 UTC Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.572978 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:01 crc kubenswrapper[4859]: E0120 09:20:01.573317 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.572995 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.573167 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:01 crc kubenswrapper[4859]: E0120 09:20:01.573502 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:01 crc kubenswrapper[4859]: E0120 09:20:01.573611 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.585277 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.585335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.585354 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.585376 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.585395 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.692488 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.692555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.692574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.692600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.692619 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.795842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.795888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.795905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.795928 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.795945 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.898545 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.898606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.898624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.898647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:01 crc kubenswrapper[4859]: I0120 09:20:01.898664 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:01Z","lastTransitionTime":"2026-01-20T09:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.002191 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.002230 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.002242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.002258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.002271 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.072698 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/3.log" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.073257 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/2.log" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.076238 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" exitCode=1 Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.076284 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.076318 4859 scope.go:117] "RemoveContainer" containerID="e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.077069 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:02 crc kubenswrapper[4859]: E0120 09:20:02.077284 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.092518 4859 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.104153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.104224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.104242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.104267 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.104285 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.123869 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7acfb479d8547a338aadb3bd04a6b24555d82756824f8aaa31b54a8813c8982\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:31Z\\\",\\\"message\\\":\\\"\\\\nI0120 09:19:31.685347 6483 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685407 6483 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 09:19:31.685285 6483 reflector.go:311] 
Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 09:19:31.685515 6483 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 09:19:31.685583 6483 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 09:19:31.685674 6483 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 09:19:31.685434 6483 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 09:19:31.685866 6483 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 09:19:31.685893 6483 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 09:19:31.685976 6483 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 09:19:31.685992 6483 factory.go:656] Stopping watch factory\\\\nI0120 09:19:31.686132 6483 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:20:01Z\\\",\\\"message\\\":\\\"GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.650755 6879 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.652191 6879 ovnkube.go:599] Stopped ovnkube\\\\nI0120 09:20:01.652239 6879 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 09:20:01.652436 6879 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\
\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\
":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: 
I0120 09:20:02.150472 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.165500 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.184335 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.204587 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.207078 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.207112 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.207122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.207137 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.207146 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.222741 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b
33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.237891 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.255323 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.270417 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.285542 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.300000 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.309832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.309897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.309915 4859 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.309940 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.309960 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.312127 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 
20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.325577 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.339462 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.359308 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.371728 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.385702 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:02Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.413314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.413379 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.413393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.413412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.413425 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.515444 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.515492 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.515506 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.515524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.515536 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.566055 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:00:23.176134927 +0000 UTC Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.572708 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:02 crc kubenswrapper[4859]: E0120 09:20:02.573018 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.617849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.617888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.617922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.617938 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.617951 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.720864 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.720916 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.720932 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.720957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.720973 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.823902 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.823936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.823945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.823958 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.823966 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.926531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.926578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.926594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.926621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:02 crc kubenswrapper[4859]: I0120 09:20:02.926638 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:02Z","lastTransitionTime":"2026-01-20T09:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.029733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.029834 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.029859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.029886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.029904 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.083217 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/3.log" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.088117 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:03 crc kubenswrapper[4859]: E0120 09:20:03.088401 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.110696 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.131099 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.132607 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.132667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.132684 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 
09:20:03.132707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.132723 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.148692 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.169115 4859 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa 
to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.190867 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.207384 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc 
kubenswrapper[4859]: I0120 09:20:03.225843 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.235561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.235606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.235617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.235635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.235655 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.249092 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.273429 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.287415 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa94
8b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.307370 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.325167 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.338969 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.339025 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.339036 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc 
kubenswrapper[4859]: I0120 09:20:03.339054 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.339064 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.341246 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.357603 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.368283 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.391682 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.405029 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.427340 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:20:01Z\\\",\\\"message\\\":\\\"GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.650755 6879 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.652191 6879 ovnkube.go:599] Stopped ovnkube\\\\nI0120 09:20:01.652239 6879 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 09:20:01.652436 6879 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:03Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.441207 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.441253 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.441268 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.441287 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.441300 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.544235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.544276 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.544284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.544298 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.544307 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.566897 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:31:35.182435872 +0000 UTC Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.573377 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.573384 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.573481 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:03 crc kubenswrapper[4859]: E0120 09:20:03.573673 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:03 crc kubenswrapper[4859]: E0120 09:20:03.573824 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:03 crc kubenswrapper[4859]: E0120 09:20:03.573977 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.647958 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.648025 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.648051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.648079 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.648102 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.750971 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.751033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.751053 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.751080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.751101 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.854919 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.854990 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.855009 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.855037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.855057 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.958298 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.958374 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.958400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.958431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:03 crc kubenswrapper[4859]: I0120 09:20:03.958455 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:03Z","lastTransitionTime":"2026-01-20T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.060373 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.060462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.060481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.060505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.060525 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.163116 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.163157 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.163173 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.163198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.163214 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.266536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.266617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.266637 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.266663 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.266680 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.369669 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.369714 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.369725 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.369741 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.369752 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.473468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.473535 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.473554 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.473580 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.473611 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.567968 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:06:49.292791318 +0000 UTC Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.573290 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:04 crc kubenswrapper[4859]: E0120 09:20:04.573466 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.576543 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.576617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.576635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.577080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.577139 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.679873 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.679949 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.679965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.679987 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.680005 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.783183 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.783250 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.783266 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.783289 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.783304 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.887334 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.887408 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.887429 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.887458 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.887483 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.991168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.991239 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.991257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.991278 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:04 crc kubenswrapper[4859]: I0120 09:20:04.991295 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:04Z","lastTransitionTime":"2026-01-20T09:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.094000 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.094055 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.094072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.094109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.094126 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.197849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.197918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.197938 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.197961 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.197978 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.301658 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.301754 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.301773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.301834 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.301852 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.404392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.404440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.404475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.404493 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.404504 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.507112 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.507170 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.507188 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.507211 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.507228 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.569076 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:39:46.808954486 +0000 UTC Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.573533 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.573604 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:05 crc kubenswrapper[4859]: E0120 09:20:05.573715 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.573748 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:05 crc kubenswrapper[4859]: E0120 09:20:05.573894 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:05 crc kubenswrapper[4859]: E0120 09:20:05.574036 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.589580 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.606121 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.610726 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.610899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.610964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.611038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.611110 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.620981 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.645275 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.655740 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.683047 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.699133 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.713469 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.713530 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.713547 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.713571 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc 
kubenswrapper[4859]: I0120 09:20:05.713589 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.730776 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:20:01Z\\\",\\\"message\\\":\\\"GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.650755 6879 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.652191 6879 ovnkube.go:599] Stopped ovnkube\\\\nI0120 09:20:01.652239 6879 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 09:20:01.652436 6879 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.748919 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.767854 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.785023 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.799471 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc 
kubenswrapper[4859]: I0120 09:20:05.816509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.816558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.816570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.816592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.816606 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.817688 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b
33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.833567 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.851843 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.869279 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.888844 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.901747 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:05Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.919215 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.919265 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.919278 4859 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.919296 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:05 crc kubenswrapper[4859]: I0120 09:20:05.919309 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:05Z","lastTransitionTime":"2026-01-20T09:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.022052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.022102 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.022120 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.022138 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.022151 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.124960 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.125003 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.125014 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.125031 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.125044 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.228354 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.228835 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.228947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.229059 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.229174 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.332537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.332933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.333104 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.333279 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.333494 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.436875 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.437249 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.437383 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.437531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.437668 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.540717 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.540771 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.540819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.540842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.540858 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.569508 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:11:31.190922013 +0000 UTC Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.572879 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:06 crc kubenswrapper[4859]: E0120 09:20:06.573058 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.643673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.643729 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.643746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.643767 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.643810 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.746974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.747041 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.747058 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.747084 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.747102 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.849896 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.849964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.849982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.850008 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.850026 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.952475 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.952559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.952582 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.952613 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:06 crc kubenswrapper[4859]: I0120 09:20:06.952636 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:06Z","lastTransitionTime":"2026-01-20T09:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.055434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.055489 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.055505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.055529 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.055546 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.157952 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.158016 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.158033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.158057 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.158075 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.260338 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.260377 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.260389 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.260406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.260418 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.362921 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.362962 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.362973 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.362987 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.362997 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.465838 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.465919 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.465944 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.465979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.466002 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.568705 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.568775 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.568832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.568862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.568885 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.569859 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:01:07.454116103 +0000 UTC Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.574317 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.574351 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:07 crc kubenswrapper[4859]: E0120 09:20:07.574462 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.574498 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:07 crc kubenswrapper[4859]: E0120 09:20:07.574600 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:07 crc kubenswrapper[4859]: E0120 09:20:07.574689 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.671560 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.671607 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.671656 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.671677 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.671694 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.774907 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.775005 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.775021 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.775077 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.775096 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.877361 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.877420 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.877436 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.877457 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.877473 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.979919 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.979972 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.979989 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.980011 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:07 crc kubenswrapper[4859]: I0120 09:20:07.980029 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:07Z","lastTransitionTime":"2026-01-20T09:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.083269 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.083325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.083342 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.083364 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.083380 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.186175 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.186231 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.186247 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.186310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.186328 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.289907 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.289978 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.289997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.290022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.290044 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.393375 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.393425 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.393443 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.393465 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.393481 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.496519 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.496599 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.496622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.496653 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.496694 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.570960 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:43:10.314020424 +0000 UTC
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.573421 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n"
Jan 20 09:20:08 crc kubenswrapper[4859]: E0120 09:20:08.573669 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.599636 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.599693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.599710 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.599734 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.599751 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.705880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.705980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.706015 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.706046 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.706072 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.808872 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.808923 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.808932 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.808946 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.808955 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.911191 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.911275 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.911292 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.911321 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:08 crc kubenswrapper[4859]: I0120 09:20:08.911343 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:08Z","lastTransitionTime":"2026-01-20T09:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.014563 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.014626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.014642 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.014666 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.014683 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.117524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.117582 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.117598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.117620 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.117636 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.221112 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.221164 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.221176 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.221198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.221213 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.323905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.323953 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.323966 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.323982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.323998 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.426809 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.427201 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.427471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.427661 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.427929 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.479723 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.479866 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.479903 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.479931 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.479991 4859 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480025 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.479989722 +0000 UTC m=+148.236005938 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480068 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.480054074 +0000 UTC m=+148.236070290 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480115 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480156 4859 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480277 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.480249719 +0000 UTC m=+148.236265925 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480164 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480365 4859 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.480453 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.480425964 +0000 UTC m=+148.236442170 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.530833 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.530900 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.530921 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.530942 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.530958 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.571734 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:42:19.64103003 +0000 UTC
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.573972 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.574089 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.574272 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.574328 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.574436 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.574484 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.580596 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.580833 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.580883 4859 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.580908 4859 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:20:09 crc kubenswrapper[4859]: E0120 09:20:09.580988 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.580965272 +0000 UTC m=+148.336981498 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.633891 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.634037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.634118 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.634221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.634299 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.738092 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.738491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.738509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.738536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.738553 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.841550 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.841627 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.841647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.841671 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.841687 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.945262 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.945322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.945340 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.945368 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:09 crc kubenswrapper[4859]: I0120 09:20:09.945385 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:09Z","lastTransitionTime":"2026-01-20T09:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.048201 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.048259 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.048275 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.048297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.048314 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.150997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.151044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.151061 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.151082 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.151098 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.190611 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.190682 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.190708 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.190742 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.190762 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.214088 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.220174 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.220233 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.220256 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.220280 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.220300 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.241211 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.246732 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.246968 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.247115 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.247252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.247407 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.268531 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.274042 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.274107 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.274125 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.274555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.274614 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.293864 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.298723 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.299119 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.299355 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.299645 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.299884 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.321134 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:10Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.321386 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.323878 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.323924 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.323940 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.323963 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.323981 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.426809 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.426861 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.426877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.427006 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.427025 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.534436 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.534626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.534648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.534678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.534711 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.572239 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:21:17.08339064 +0000 UTC Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.573495 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:10 crc kubenswrapper[4859]: E0120 09:20:10.573682 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.639040 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.639106 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.639134 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.639163 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.639184 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.742995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.743069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.743094 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.743122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.743145 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.846751 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.846908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.846929 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.846953 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.846971 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.950180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.950239 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.950253 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.950271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:10 crc kubenswrapper[4859]: I0120 09:20:10.950287 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:10Z","lastTransitionTime":"2026-01-20T09:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.053163 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.053221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.053238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.053268 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.053285 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.156697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.156754 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.156777 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.156843 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.156865 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.259944 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.260051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.260081 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.260111 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.260132 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.362695 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.362752 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.362769 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.362827 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.362846 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.466234 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.466299 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.466322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.466350 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.466373 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.569056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.569130 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.569147 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.569171 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.569195 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.573214 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:39:30.945597617 +0000 UTC Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.573455 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.573469 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:11 crc kubenswrapper[4859]: E0120 09:20:11.573594 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.573610 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:11 crc kubenswrapper[4859]: E0120 09:20:11.573697 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:11 crc kubenswrapper[4859]: E0120 09:20:11.573852 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.671489 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.671542 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.671561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.671592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.671617 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.774727 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.774811 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.774828 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.774854 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.774870 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.877984 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.878039 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.878059 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.878086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.878103 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.981143 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.981216 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.981239 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.981270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:11 crc kubenswrapper[4859]: I0120 09:20:11.981291 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:11Z","lastTransitionTime":"2026-01-20T09:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.084346 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.084393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.084405 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.084419 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.084430 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.187654 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.187721 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.187738 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.187761 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.187778 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.291299 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.291363 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.291380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.291402 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.291420 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.394105 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.394283 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.394361 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.394426 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.394449 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.497411 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.497456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.497468 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.497484 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.497494 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.573341 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:45:00.807711369 +0000 UTC Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.573735 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:12 crc kubenswrapper[4859]: E0120 09:20:12.574596 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.600450 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.600539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.600565 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.600607 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.600632 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.704517 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.704583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.704601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.704628 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.704647 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.807322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.807364 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.807374 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.807388 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.807399 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.910015 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.910061 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.910072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.910087 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:12 crc kubenswrapper[4859]: I0120 09:20:12.910100 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:12Z","lastTransitionTime":"2026-01-20T09:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.012848 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.012876 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.012885 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.012897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.012905 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.116757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.116873 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.116892 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.116978 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.116996 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.219699 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.219760 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.219805 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.219886 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.219904 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.323590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.323644 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.323661 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.323684 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.323702 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.427766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.427865 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.427905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.427937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.427958 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.531109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.531177 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.531194 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.531221 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.531239 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.572933 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.572991 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.572997 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:13 crc kubenswrapper[4859]: E0120 09:20:13.573121 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:13 crc kubenswrapper[4859]: E0120 09:20:13.573367 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.573503 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:40:49.449469863 +0000 UTC Jan 20 09:20:13 crc kubenswrapper[4859]: E0120 09:20:13.573657 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.634080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.634440 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.634570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.634696 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.634852 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.738086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.738126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.738142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.738166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.738252 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.841533 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.841897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.841985 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.842086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.842171 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.945212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.945266 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.945286 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.945309 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:13 crc kubenswrapper[4859]: I0120 09:20:13.945326 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:13Z","lastTransitionTime":"2026-01-20T09:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.047894 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.047957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.047980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.048011 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.048033 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.151250 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.151295 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.151310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.151329 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.151340 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.255502 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.255735 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.255764 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.255817 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.255846 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.358413 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.358455 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.358494 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.358510 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.358522 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.461114 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.461446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.461506 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.461575 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.461597 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.564912 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.564972 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.564982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.564998 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.565009 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.572736 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:14 crc kubenswrapper[4859]: E0120 09:20:14.572909 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.573715 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:54:55.824686331 +0000 UTC Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.667236 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.667275 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.667286 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.667301 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.667314 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.771345 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.771406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.771426 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.771453 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.771475 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.873707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.873747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.873755 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.873769 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.873778 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.976148 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.976217 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.976241 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.976271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:14 crc kubenswrapper[4859]: I0120 09:20:14.976291 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:14Z","lastTransitionTime":"2026-01-20T09:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.079821 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.079874 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.079888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.079907 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.079921 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.182227 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.182273 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.182291 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.182313 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.182325 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.286160 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.286226 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.286242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.286267 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.286285 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.389164 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.389219 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.389235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.389257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.389274 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.492910 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.492973 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.492992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.493016 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.493033 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.573702 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.574011 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:15 crc kubenswrapper[4859]: E0120 09:20:15.574251 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.574307 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.574329 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:50:19.882340318 +0000 UTC Jan 20 09:20:15 crc kubenswrapper[4859]: E0120 09:20:15.574756 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:15 crc kubenswrapper[4859]: E0120 09:20:15.574951 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.575270 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:15 crc kubenswrapper[4859]: E0120 09:20:15.575514 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.592874 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.597005 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.599065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.599135 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.599153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.599178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.599196 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.618058 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.635630 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.656509 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.679263 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2
73362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib
/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.698639 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.702339 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.702502 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.702520 4859 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.702541 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.702555 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.716579 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 
20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.737936 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263a
a15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.756175 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.775104 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.799657 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.806600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.806980 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.807039 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.807072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.807093 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.814858 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.850601 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f40
18e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.872283 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.900099 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:20:01Z\\\",\\\"message\\\":\\\"GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.650755 6879 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.652191 6879 ovnkube.go:599] Stopped ovnkube\\\\nI0120 09:20:01.652239 6879 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 09:20:01.652436 6879 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.909856 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.909993 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.910082 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.910180 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.910270 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:15Z","lastTransitionTime":"2026-01-20T09:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.919219 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\",\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.937720 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:15 crc kubenswrapper[4859]: I0120 09:20:15.955663 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:15Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.013579 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.013649 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.013673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.013699 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.013719 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.116286 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.116341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.116358 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.116532 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.116548 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.218403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.218471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.218494 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.218523 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.218571 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.321726 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.321839 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.321864 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.321930 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.321954 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.425626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.425683 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.425705 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.425737 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.425763 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.529085 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.529252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.529278 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.529305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.529379 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.573348 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:16 crc kubenswrapper[4859]: E0120 09:20:16.573548 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.575397 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:47:59.729472133 +0000 UTC Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.633959 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.634029 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.634059 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.634109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.634136 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.737412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.737479 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.737499 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.737523 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.737544 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.839860 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.839936 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.839959 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.839985 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.840003 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.943073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.943137 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.943157 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.943183 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:16 crc kubenswrapper[4859]: I0120 09:20:16.943202 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:16Z","lastTransitionTime":"2026-01-20T09:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.045555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.045591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.045600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.045617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.045627 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.147733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.148086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.148174 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.148258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.148356 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.250692 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.250756 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.250774 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.250826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.250843 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.354025 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.354091 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.354108 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.354144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.354179 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.458238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.458341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.458359 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.458392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.458410 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.561970 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.562027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.562046 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.562073 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.562095 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.573862 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:17 crc kubenswrapper[4859]: E0120 09:20:17.574034 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.574123 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.574140 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:17 crc kubenswrapper[4859]: E0120 09:20:17.574323 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:17 crc kubenswrapper[4859]: E0120 09:20:17.574422 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.576386 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:24:55.10078912 +0000 UTC Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.665678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.665747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.665766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.665819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.665839 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.769414 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.769482 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.769500 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.769524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.769541 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.872244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.872309 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.872319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.872353 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.872367 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.976108 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.976163 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.976174 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.976194 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:17 crc kubenswrapper[4859]: I0120 09:20:17.976208 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:17Z","lastTransitionTime":"2026-01-20T09:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.080498 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.080552 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.080569 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.080596 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.080615 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.183742 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.183819 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.183839 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.183864 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.183883 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.286753 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.286835 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.286852 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.286877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.286894 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.389669 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.389731 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.389748 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.389772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.389933 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.492738 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.492813 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.492824 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.492842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.492853 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.572830 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:18 crc kubenswrapper[4859]: E0120 09:20:18.573454 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.577259 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:52:25.120077894 +0000 UTC Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.596530 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.596601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.596620 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.596647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.596668 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.699303 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.699351 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.699362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.699379 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.699391 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.802202 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.802272 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.802292 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.802319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.802335 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.905719 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.905768 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.905829 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.905853 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:18 crc kubenswrapper[4859]: I0120 09:20:18.905869 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:18Z","lastTransitionTime":"2026-01-20T09:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.009142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.009208 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.009234 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.009263 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.009283 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.111758 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.111849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.111868 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.111892 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.111914 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.215131 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.215200 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.215218 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.215241 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.215256 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.319178 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.319220 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.319235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.319254 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.319268 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.421849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.421925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.421946 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.421977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.422001 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.524573 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.524636 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.524652 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.524676 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.524731 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.573351 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.573462 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.573358 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:19 crc kubenswrapper[4859]: E0120 09:20:19.573558 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:19 crc kubenswrapper[4859]: E0120 09:20:19.573687 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:19 crc kubenswrapper[4859]: E0120 09:20:19.573869 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.577465 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:55:44.952700203 +0000 UTC Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.627596 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.627670 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.627693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.627722 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.627744 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.730903 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.730977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.731000 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.731032 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.731056 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.834530 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.834586 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.834606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.834632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.834649 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.937583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.937690 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.937710 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.937735 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:19 crc kubenswrapper[4859]: I0120 09:20:19.937755 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:19Z","lastTransitionTime":"2026-01-20T09:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.040725 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.040824 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.040846 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.040873 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.040890 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.143593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.143661 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.143684 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.143713 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.143733 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.246730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.246832 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.246856 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.246887 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.246908 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.350167 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.350240 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.350264 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.350297 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.350319 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.453561 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.453667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.453685 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.453746 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.453764 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.556558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.556604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.556615 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.556633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.556643 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.573153 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.573282 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.578563 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:12:31.185476069 +0000 UTC Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.658720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.658824 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.658853 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.658883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.658906 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.660997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.661072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.661095 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.661125 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.661146 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.682873 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:20Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.688686 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.688735 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.688759 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.688820 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.688845 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.707994 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:20Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.713325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.713392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.713412 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.713437 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.713457 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.732407 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:20Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.736880 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.736947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.736969 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.736996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.737017 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.757761 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:20Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.763536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.763619 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.763639 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.763747 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.764138 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.781052 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:20Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:20 crc kubenswrapper[4859]: E0120 09:20:20.781414 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.783778 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.783858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.783875 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.783897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.783913 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.887168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.887211 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.887224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.887238 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.887247 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.989706 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.989761 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.989776 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.989816 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:20 crc kubenswrapper[4859]: I0120 09:20:20.989833 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:20Z","lastTransitionTime":"2026-01-20T09:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.092965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.093027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.093044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.093071 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.093090 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.196598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.196636 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.196645 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.196659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.196668 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.298952 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.299049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.299058 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.299071 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.299080 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.401853 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.401908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.401925 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.401947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.401965 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.504863 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.505354 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.505604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.505831 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.506071 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.573459 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.573498 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.573531 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:21 crc kubenswrapper[4859]: E0120 09:20:21.574206 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:21 crc kubenswrapper[4859]: E0120 09:20:21.574290 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:21 crc kubenswrapper[4859]: E0120 09:20:21.574333 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.579023 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:54:04.234662482 +0000 UTC Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.608519 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.608559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.608570 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.608586 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.608635 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.711205 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.711266 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.711284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.711312 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.711330 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.814106 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.814167 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.814186 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.814210 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.814228 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.916748 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.916845 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.916868 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.916899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:21 crc kubenswrapper[4859]: I0120 09:20:21.916920 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:21Z","lastTransitionTime":"2026-01-20T09:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.019622 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.019685 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.019705 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.019729 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.019747 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.123154 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.123232 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.123255 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.123282 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.123306 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.226480 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.226559 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.226583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.226614 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.226637 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.329816 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.329869 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.329885 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.329908 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.329926 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.433128 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.433214 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.433237 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.433270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.433293 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.536144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.536207 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.536224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.536246 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.536263 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.572696 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:22 crc kubenswrapper[4859]: E0120 09:20:22.572890 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.580147 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:55:49.934813235 +0000 UTC Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.638945 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.639023 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.639048 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.639078 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.639100 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.728342 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:22 crc kubenswrapper[4859]: E0120 09:20:22.728535 4859 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:20:22 crc kubenswrapper[4859]: E0120 09:20:22.728606 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs podName:0c059dec-0bda-4110-9050-7cbba39eb183 nodeName:}" failed. No retries permitted until 2026-01-20 09:21:26.728583562 +0000 UTC m=+161.484599778 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs") pod "network-metrics-daemon-tw45n" (UID: "0c059dec-0bda-4110-9050-7cbba39eb183") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.742305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.742526 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.742982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.743341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.743728 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.847419 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.847547 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.847568 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.847593 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.847611 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.949972 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.950022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.950038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.950060 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:22 crc kubenswrapper[4859]: I0120 09:20:22.950078 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:22Z","lastTransitionTime":"2026-01-20T09:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.052528 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.052600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.052619 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.052643 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.052662 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.155591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.155650 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.155668 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.155693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.155712 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.258229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.258274 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.258285 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.258302 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.258315 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.361414 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.361488 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.361511 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.361540 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.361563 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.464157 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.464226 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.464244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.464266 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.464284 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.567361 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.567428 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.567446 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.567473 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.567495 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.574353 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.574407 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.574383 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:23 crc kubenswrapper[4859]: E0120 09:20:23.574536 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:23 crc kubenswrapper[4859]: E0120 09:20:23.574671 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:23 crc kubenswrapper[4859]: E0120 09:20:23.574816 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.580520 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:17:06.851648851 +0000 UTC Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.669672 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.669736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.669760 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.669828 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.669852 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.772326 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.772390 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.772407 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.772435 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.772453 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.876599 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.876862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.876955 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.877034 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.877105 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.980164 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.980531 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.980720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.980988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:23 crc kubenswrapper[4859]: I0120 09:20:23.981179 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:23Z","lastTransitionTime":"2026-01-20T09:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.084575 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.084678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.084697 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.084723 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.084741 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.187261 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.187306 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.187323 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.187342 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.187356 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.290594 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.290659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.290675 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.290698 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.290716 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.393276 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.393335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.393349 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.393370 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.393384 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.496667 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.496736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.496757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.496809 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.496830 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.572876 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:24 crc kubenswrapper[4859]: E0120 09:20:24.573085 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.582126 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:55:28.934252945 +0000 UTC Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.599392 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.599428 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.599441 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.599456 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.599467 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.704947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.705005 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.705024 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.705049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.705067 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.808089 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.808152 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.808168 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.808191 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.808208 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.911382 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.911470 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.911487 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.911509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:24 crc kubenswrapper[4859]: I0120 09:20:24.911526 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:24Z","lastTransitionTime":"2026-01-20T09:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.014558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.014630 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.014648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.014673 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.014690 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.117843 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.117933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.117957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.117983 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.118004 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.220482 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.220578 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.220601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.220632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.220658 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.322949 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.322994 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.323013 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.323037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.323053 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.425634 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.425696 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.425713 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.425740 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.425758 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.528301 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.528352 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.528365 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.528384 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.528399 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.573503 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.573573 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.573573 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:25 crc kubenswrapper[4859]: E0120 09:20:25.573691 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:25 crc kubenswrapper[4859]: E0120 09:20:25.573865 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:25 crc kubenswrapper[4859]: E0120 09:20:25.574166 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.582855 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:28:16.176959871 +0000 UTC Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.606444 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfe04730-660d-4e59-8b5e-15e94d72990f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:20:01Z\\\",\\\"message\\\":\\\"GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.650755 6879 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0120 09:20:01.652191 6879 ovnkube.go:599] Stopped ovnkube\\\\nI0120 09:20:01.652239 6879 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 09:20:01.652436 6879 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:20:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac69fd57e09f282974
240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.630982 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.631020 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.631031 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.631046 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.631057 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.639120 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4a08e8a-8eff-4991-9f5f-e3036defbe4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965a371775c07ad7b6c1e4d000f42dd5a27119379a44443f6cd4d07bbc26c3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7396e88410f95751a4e8d4b1835b24d14bdb92873c358eebf0245de7b84417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f5ff30e8fbd0d733a09beee046817082e39a0ec47b7a010bc0182a107b4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6fe8178497d87f7410f92cb2bb0cfd162fed56924a720ce78b755ae06b9599b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d208e4a7d42ee2292b9b63499a51036d174b145642d6600620e826c22712be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47aff5fbca67743d4d65244e780fc2288075a10ddee851388d1c7822e525b48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59b3f544c422b713555b4ac7effefc7e7b0918d6dc6e58d645cacaed8b61f016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27f4018e524bdb14b0f7d91dcb04e8c612d24410f322a99480318841a1ad7ad0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.655424 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7ms2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"050ca282-e7f0-494e-a04c-4b74811dccfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80db4949d9cd02568692d6ce5c6ad5322462f9710648cfa2aabfaf3dfe3b0678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5g8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7ms2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.681549 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1638d1e6006c95413a80cde1896495f639dfbbacc6ded11f00d7936f451ee3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab88a49549d18e94f2b980460a21dd09ac82288aca2674c5b764ed17e1e53db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.704828 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6321ebe3-45b9-45a2-b590-72495f7208a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:19:04Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0120 09:19:04.138658 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:19:04.138854 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 09:19:04.140642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-862853187/tls.crt::/tmp/serving-cert-862853187/tls.key\\\\\\\"\\\\nI0120 09:19:04.490239 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:19:04.493302 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:19:04.493319 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:19:04.493336 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:19:04.493341 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:19:04.497370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:19:04.497387 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497392 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:19:04.497395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:19:04.497398 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:19:04.497401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:19:04.497404 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:19:04.497590 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0120 09:19:04.501720 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c9d0afac69361f99849ac73a90efdde3
558bb94654f36d49de183b1d9a6a699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.726570 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.733647 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.733709 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.733727 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 
09:20:25.733754 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.733772 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.748586 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b58a740d3a0c7f887b6ff9b386f9e339405c99189b1e03d66251852110070f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.767894 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b432dbbe9e8253e025adeebe39af9cf3abe02d2b24a21dda5e38641580b6976b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.786688 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab032ef-85ae-456c-b5ea-750bc1c32483\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e021c9473238e4c6188b94cf7775eca1597fc9a6870a28be90bdb24fb64d88a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfpv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knvgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.806485 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xqq7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81947dc9-599a-4d35-a9c5-2684294a3afb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T09:19:52Z\\\",\\\"message\\\":\\\"2026-01-20T09:19:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa\\\\n2026-01-20T09:19:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0781d4d0-70b1-4872-a0fb-1efae1c68bfa to /host/opt/cni/bin/\\\\n2026-01-20T09:19:06Z [verbose] multus-daemon started\\\\n2026-01-20T09:19:06Z [verbose] 
Readiness Indicator file check\\\\n2026-01-20T09:19:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45xbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xqq7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.824064 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e594a558-b805-4f1f-9cfa-a50d02390b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59fda147b0555b7a4cbcd4a56308871ec3fb62bf901b456bc4d45bf9e8bc5da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0abf32c49fe3eeb60c0ef6dfc4481e4bd516e
9ea3e14b22edc494a98ee040703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2m7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hdfrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.837130 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.837224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.837247 4859 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.837270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.837287 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.839490 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tw45n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c059dec-0bda-4110-9050-7cbba39eb183\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bptsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tw45n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 
20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.855158 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dadddbfe-4ec2-418e-a2fc-4274d29bc8e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73f3b63cd5d5957c366dcf2640e091581126bf17931e7d6c276724d66529d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://d019ae4f9815cbf4169b04638cee2f36f3e3af96db68bcfa27ead131b6af1c15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d019ae4f9815cbf4169b04638cee2f36f3e3af96db68bcfa27ead131b6af1c15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.873506 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84e312e-4e1d-4e36-aac8-ee006d0f8138\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df83e8315267179acc06d4960bdc972b2247daa348728967c759e388265e41f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bbf5f3af02dbb6965e9c56e164d7b33395c4a2e7ec07a563426c42d533aa0fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e9449907b64185201809a6b6d1289879eee5a43b228005ca45aeef6fed8376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://71c3d64db6eba9605c65f2ea6da0a6bfdf015b82afeecfec49586773b916cc61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:18:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.891285 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.913638 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab750176-1775-4e98-ba5e-3b7bab1f6f2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470fa40b65391cca1d28657f139c1b03e7ed38a0035ba58ea6741793e79d2358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c281cbc5f56a769009e52a3bc54b7aec09301e49972f53db92a6f9759c0dc66f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f8d1c9ebcbed1a74ef13c2b2161949c482a349d806bba73a2f5cec3c99311ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa976bcdf9b382fb228d87e9bb84168d7ef40eddb299ffe05713d38568b06e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de804
dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de804dabde45e0f5e3c5d91a67d10081e04bf4dedcc2acba886f0d1ae8fdc845\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a4ac3b93a763788bbc3a762223660e11598b33a617c1a6fb7f5640d9e24aa43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d759807512496e41d9d1465a35e712d4c5edc5829b4bc8e082d33c038f37c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:19:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:19:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wvqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pg7bd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.930125 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lf9ds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2998e90-0271-4de9-8998-64cf330dafcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7b9102e60929fd07e0802f0c22755463ad3379c89d3332b5e70beb73bc03f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-20T09:19:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:19:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lf9ds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.940303 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.940348 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.940365 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.940391 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.940409 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:25Z","lastTransitionTime":"2026-01-20T09:20:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.950836 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67c1763f-7a24-4b0d-9800-9a0d1056870c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a3800db6dbe2e0fa5ddf8e4051e0851da04e620de7945baaafa9b2971ba20a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2155bf8193ac7adea66392c635ee23a57985ed61bab773d0b21ecadee6924c7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37a38da7437ed921ebca0354061182676f2c94dbc63e0052d7b54472aaea467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:18:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:25 crc kubenswrapper[4859]: I0120 09:20:25.972729 4859 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:19:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:25Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.043613 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.043662 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.043675 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.043692 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.043705 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.146571 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.146615 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.146628 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.146645 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.146658 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.249973 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.250473 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.250629 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.250810 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.250954 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.354541 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.354589 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.354606 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.354628 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.354645 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.457580 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.457624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.457637 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.457655 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.457665 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.559624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.559653 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.559664 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.559682 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.559694 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.573502 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:26 crc kubenswrapper[4859]: E0120 09:20:26.573635 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.583754 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:32:27.110711929 +0000 UTC Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.663576 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.663642 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.663665 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.663693 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.663716 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.767091 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.767190 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.767203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.767227 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.767245 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.871301 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.871380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.871403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.871435 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.871460 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.974391 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.974455 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.974467 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.974483 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:26 crc kubenswrapper[4859]: I0120 09:20:26.974494 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:26Z","lastTransitionTime":"2026-01-20T09:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.077564 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.077620 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.077636 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.077657 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.077695 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.183500 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.183553 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.183564 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.183579 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.183592 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.286462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.286550 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.286568 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.286587 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.286603 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.389858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.390257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.390406 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.390599 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.390820 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.493416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.493714 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.493962 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.494160 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.494314 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.573455 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.573473 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.573506 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:27 crc kubenswrapper[4859]: E0120 09:20:27.573967 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:27 crc kubenswrapper[4859]: E0120 09:20:27.574036 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:27 crc kubenswrapper[4859]: E0120 09:20:27.574119 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.584838 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:43:05.155184096 +0000 UTC Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.597247 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.597300 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.597318 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.597369 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.597387 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.700196 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.700271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.700288 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.700313 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.700331 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.803459 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.803522 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.803535 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.803556 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.803570 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.907144 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.907204 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.907223 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.907249 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:27 crc kubenswrapper[4859]: I0120 09:20:27.907268 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:27Z","lastTransitionTime":"2026-01-20T09:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.015993 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.016050 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.016072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.016097 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.016116 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.119078 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.119159 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.119182 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.119216 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.119237 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.223352 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.223421 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.223438 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.223460 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.223476 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.326808 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.326854 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.326867 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.326883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.326894 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.429722 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.429767 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.429795 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.429811 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.429822 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.533332 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.533387 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.533400 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.533420 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.533434 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.573294 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:28 crc kubenswrapper[4859]: E0120 09:20:28.573463 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.585817 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:36:32.136260156 +0000 UTC Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.636325 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.636376 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.636393 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.636418 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.636437 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.739368 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.739434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.739454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.739480 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.739502 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.842588 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.842676 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.842699 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.842730 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.842753 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.944992 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.945052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.945070 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.945098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:28 crc kubenswrapper[4859]: I0120 09:20:28.945116 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:28Z","lastTransitionTime":"2026-01-20T09:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.048558 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.048933 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.049072 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.049212 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.049349 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.152051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.152110 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.152126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.152150 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.152169 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.256063 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.256160 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.256179 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.256203 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.256219 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.359388 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.359454 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.359471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.359500 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.359516 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.462539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.462591 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.462608 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.462631 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.462676 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.565895 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.565977 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.566001 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.566029 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.566052 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.573538 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.573598 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:29 crc kubenswrapper[4859]: E0120 09:20:29.573708 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.573748 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:29 crc kubenswrapper[4859]: E0120 09:20:29.573987 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:29 crc kubenswrapper[4859]: E0120 09:20:29.574130 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.575180 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:29 crc kubenswrapper[4859]: E0120 09:20:29.575453 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.586344 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:04:22.022109764 +0000 UTC Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.669321 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.669380 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.669395 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.669416 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.669435 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.772340 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.772404 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.772422 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.772448 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.772465 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.875447 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.875510 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.875529 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.875553 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.875573 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.978198 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.978254 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.978271 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.978295 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:29 crc kubenswrapper[4859]: I0120 09:20:29.978312 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:29Z","lastTransitionTime":"2026-01-20T09:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.081361 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.081408 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.081424 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.081449 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.081465 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.184984 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.185027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.185044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.185067 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.185086 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.288662 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.288715 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.288733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.288756 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.288773 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.391515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.391574 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.391592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.391615 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.391634 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.494955 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.495016 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.495033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.495063 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.495086 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.573266 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:30 crc kubenswrapper[4859]: E0120 09:20:30.573471 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.586691 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:04:22.524449237 +0000 UTC Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.597997 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.598075 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.598098 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.598130 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.598155 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.700633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.700703 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.700721 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.700745 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.700762 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.803897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.803965 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.803984 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.804016 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.804034 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.907268 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.907328 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.907345 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.907370 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.907388 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.982405 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.982491 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.982504 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.982522 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:30 crc kubenswrapper[4859]: I0120 09:20:30.982534 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:30Z","lastTransitionTime":"2026-01-20T09:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.001708 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:30Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.006421 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.006499 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.006524 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.006555 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.006573 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.028610 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.034437 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.034538 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.034560 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.034585 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.034604 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.054442 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.059160 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.059220 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.059235 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.059254 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.059310 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.077841 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.082181 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.082216 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.082229 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.082247 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.082259 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.100427 4859 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:20:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3d814a40-acf4-473d-aa01-76b4cff444d5\\\",\\\"systemUUID\\\":\\\"a9c4b411-791a-4e67-b840-f9825626554f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T09:20:31Z is after 2025-08-24T17:21:41Z" Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.100609 4859 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.103044 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.103107 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.103124 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.103150 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.103166 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.206316 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.206702 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.206893 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.207051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.207191 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.310951 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.311017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.311037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.311064 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.311084 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.415001 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.415065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.415085 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.415109 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.415127 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.518724 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.518862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.518881 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.518910 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.518928 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.572623 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.572664 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.572851 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.573029 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.573262 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:31 crc kubenswrapper[4859]: E0120 09:20:31.573350 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.587070 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:25:49.193929973 +0000 UTC Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.621604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.621632 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.621640 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.621652 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.621660 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.725033 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.725142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.725214 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.725250 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.725273 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.828206 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.828252 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.828264 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.828280 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.828292 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.931346 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.931397 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.931413 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.931439 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:31 crc kubenswrapper[4859]: I0120 09:20:31.931455 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:31Z","lastTransitionTime":"2026-01-20T09:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.033431 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.033496 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.033512 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.033534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.033551 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.135978 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.136037 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.136049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.136065 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.136077 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.238957 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.239043 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.239061 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.239083 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.239098 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.341270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.341310 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.341363 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.341384 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.341395 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.443918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.443961 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.443974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.443991 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.444002 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.546964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.547038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.547049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.547069 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.547080 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.573580 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:32 crc kubenswrapper[4859]: E0120 09:20:32.573910 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.587902 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:16:41.738821464 +0000 UTC Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.649534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.649586 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.649598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.649617 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.649630 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.752068 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.752126 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.752142 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.752166 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.752187 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.856102 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.856169 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.856193 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.856224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.856244 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.960070 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.960149 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.960173 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.960197 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:32 crc kubenswrapper[4859]: I0120 09:20:32.960216 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:32Z","lastTransitionTime":"2026-01-20T09:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.063890 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.063937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.063952 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.063974 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.063990 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.167172 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.167222 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.167233 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.167258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.167272 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.270243 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.270314 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.270341 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.270373 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.270396 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.373010 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.373335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.373526 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.373681 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.373873 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.476027 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.476127 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.476150 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.476182 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.476204 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.573070 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.573167 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:33 crc kubenswrapper[4859]: E0120 09:20:33.573377 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:33 crc kubenswrapper[4859]: E0120 09:20:33.573585 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.573894 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:33 crc kubenswrapper[4859]: E0120 09:20:33.574310 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.579833 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.579883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.579902 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.579927 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.579945 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.589069 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:59:56.884110594 +0000 UTC Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.683583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.683915 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.684089 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.684242 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.684363 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.787371 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.787537 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.787566 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.787603 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.787628 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.891122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.891187 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.891204 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.891228 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.891245 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.994245 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.994305 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.994319 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.994343 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:33 crc kubenswrapper[4859]: I0120 09:20:33.994358 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:33Z","lastTransitionTime":"2026-01-20T09:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.097691 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.097760 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.097772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.097826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.097846 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.202051 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.202106 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.202124 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.202148 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.202166 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.306273 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.306353 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.306378 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.306407 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.306438 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.409364 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.409492 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.409514 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.409536 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.409589 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.512611 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.512666 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.512686 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.512709 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.512726 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.572821 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:34 crc kubenswrapper[4859]: E0120 09:20:34.573052 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.590153 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:06:01.156795644 +0000 UTC Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.615711 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.615757 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.615773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.615830 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.615850 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.718402 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.718465 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.718481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.718505 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.718522 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.821343 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.821397 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.821419 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.821444 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.821461 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.925387 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.925473 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.925492 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.925516 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:34 crc kubenswrapper[4859]: I0120 09:20:34.925533 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:34Z","lastTransitionTime":"2026-01-20T09:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.028838 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.028905 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.028923 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.028947 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.028965 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.132105 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.132158 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.132177 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.132205 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.132227 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.235752 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.235866 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.235891 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.235922 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.235943 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.338826 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.338891 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.338911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.338937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.338954 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.442642 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.442690 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.442707 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.442731 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.442748 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.545447 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.545509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.545521 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.545539 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.545552 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.572996 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:35 crc kubenswrapper[4859]: E0120 09:20:35.573145 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.573359 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.573435 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:35 crc kubenswrapper[4859]: E0120 09:20:35.573538 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:35 crc kubenswrapper[4859]: E0120 09:20:35.573716 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.590439 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:38:53.679928666 +0000 UTC Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.634825 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.634761538 podStartE2EDuration="1m30.634761538s" podCreationTimestamp="2026-01-20 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.613201017 +0000 UTC m=+110.369217213" watchObservedRunningTime="2026-01-20 09:20:35.634761538 +0000 UTC m=+110.390777754" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.648169 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.648222 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.648232 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.648322 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.648333 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.690757 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podStartSLOduration=91.690742704 podStartE2EDuration="1m31.690742704s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.666007875 +0000 UTC m=+110.422024061" watchObservedRunningTime="2026-01-20 09:20:35.690742704 +0000 UTC m=+110.446758880" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.706489 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xqq7l" podStartSLOduration=91.706464985 podStartE2EDuration="1m31.706464985s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.691282738 +0000 UTC m=+110.447298914" watchObservedRunningTime="2026-01-20 09:20:35.706464985 +0000 UTC m=+110.462481171" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.718104 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hdfrc" podStartSLOduration=91.718076483 podStartE2EDuration="1m31.718076483s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.706966698 +0000 UTC m=+110.462982884" watchObservedRunningTime="2026-01-20 09:20:35.718076483 +0000 UTC m=+110.474092669" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.733683 4859 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.733664671 podStartE2EDuration="20.733664671s" podCreationTimestamp="2026-01-20 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.733352912 +0000 UTC m=+110.489369088" watchObservedRunningTime="2026-01-20 09:20:35.733664671 +0000 UTC m=+110.489680867" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.747086 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.747071958 podStartE2EDuration="58.747071958s" podCreationTimestamp="2026-01-20 09:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.746993536 +0000 UTC m=+110.503009742" watchObservedRunningTime="2026-01-20 09:20:35.747071958 +0000 UTC m=+110.503088134" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.750584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.750624 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.750633 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.750648 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.750659 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.781943 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lf9ds" podStartSLOduration=91.781915824 podStartE2EDuration="1m31.781915824s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.781231696 +0000 UTC m=+110.537247872" watchObservedRunningTime="2026-01-20 09:20:35.781915824 +0000 UTC m=+110.537932010" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.816946 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.816927135 podStartE2EDuration="1m30.816927135s" podCreationTimestamp="2026-01-20 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.803464385 +0000 UTC m=+110.559480571" watchObservedRunningTime="2026-01-20 09:20:35.816927135 +0000 UTC m=+110.572943311" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.841240 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pg7bd" podStartSLOduration=91.84121896 podStartE2EDuration="1m31.84121896s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 09:20:35.841116168 +0000 UTC m=+110.597132344" watchObservedRunningTime="2026-01-20 09:20:35.84121896 +0000 UTC m=+110.597235146" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.852844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.852876 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.852885 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.852896 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.852905 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.863499 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.863483411 podStartE2EDuration="1m26.863483411s" podCreationTimestamp="2026-01-20 09:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.863307637 +0000 UTC m=+110.619323843" watchObservedRunningTime="2026-01-20 09:20:35.863483411 +0000 UTC m=+110.619499577" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.873181 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7ms2q" podStartSLOduration=91.873166127 podStartE2EDuration="1m31.873166127s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:35.872486639 +0000 UTC m=+110.628502825" watchObservedRunningTime="2026-01-20 09:20:35.873166127 +0000 UTC m=+110.629182303" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.955521 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.955592 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.955614 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.955641 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:35 crc kubenswrapper[4859]: I0120 09:20:35.955661 4859 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:35Z","lastTransitionTime":"2026-01-20T09:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.058590 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.058720 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.058748 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.058777 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.058835 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.161616 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.162056 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.162244 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.162460 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.162660 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.265484 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.265566 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.265589 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.265625 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.265652 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.368695 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.368755 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.368773 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.368825 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.368843 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.471960 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.472020 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.472038 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.472061 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.472079 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.573143 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:36 crc kubenswrapper[4859]: E0120 09:20:36.574060 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.575716 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.575835 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.575862 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.575893 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.575916 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.590854 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:52:05.177456529 +0000 UTC Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.678282 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.678345 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.678362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.678387 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.678403 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.780509 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.780586 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.780614 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.780645 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.780664 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.883321 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.883385 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.883403 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.883427 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.883446 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.986859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.986994 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.987116 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.987209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:36 crc kubenswrapper[4859]: I0120 09:20:36.987286 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:36Z","lastTransitionTime":"2026-01-20T09:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.090356 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.090462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.090485 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.090548 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.090567 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.193209 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.193258 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.193270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.193285 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.193300 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.296847 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.297184 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.297544 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.297888 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.298256 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.401999 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.402042 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.402052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.402067 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.402078 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.505106 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.505153 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.505165 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.505181 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.505192 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.574103 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.574216 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.574113 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:37 crc kubenswrapper[4859]: E0120 09:20:37.574319 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:37 crc kubenswrapper[4859]: E0120 09:20:37.574463 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:37 crc kubenswrapper[4859]: E0120 09:20:37.574624 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.591860 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:28:14.413302166 +0000 UTC Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.608863 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.608937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.608964 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.608995 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.609017 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.712520 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.712598 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.712621 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.712654 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.712676 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.815534 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.815601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.815618 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.815642 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.815660 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.918985 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.919047 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.919063 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.919086 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:37 crc kubenswrapper[4859]: I0120 09:20:37.919104 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:37Z","lastTransitionTime":"2026-01-20T09:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.022430 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.022471 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.022481 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.022499 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.022509 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.125375 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.125434 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.125455 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.125483 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.125504 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.228029 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.228097 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.228115 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.228141 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.228158 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.331151 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.331224 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.331243 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.331268 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.331287 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.434704 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.434858 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.434883 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.434916 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.434941 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.537961 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.538316 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.538333 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.538358 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.538380 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.573468 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n"
Jan 20 09:20:38 crc kubenswrapper[4859]: E0120 09:20:38.573670 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.593919 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:05:35.367701512 +0000 UTC
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.640944 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.640988 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.640996 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.641013 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.641022 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.743971 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.744076 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.744095 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.744122 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.744140 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.846736 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.846817 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.846837 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.846859 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.846876 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.949924 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.949994 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.950017 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.950047 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:38 crc kubenswrapper[4859]: I0120 09:20:38.950069 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:38Z","lastTransitionTime":"2026-01-20T09:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.053513 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.053583 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.053601 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.053626 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.053645 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.156217 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.156256 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.156270 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.156284 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.156293 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.224353 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/1.log"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.225105 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/0.log"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.225166 4859 generic.go:334] "Generic (PLEG): container finished" podID="81947dc9-599a-4d35-a9c5-2684294a3afb" containerID="b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499" exitCode=1
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.225211 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerDied","Data":"b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.225263 4859 scope.go:117] "RemoveContainer" containerID="c273362b4fe78bc6a444f0b6b065a85f2dc04bf751ae9bc2be9d646cb749e5a4"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.226283 4859 scope.go:117] "RemoveContainer" containerID="b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499"
Jan 20 09:20:39 crc kubenswrapper[4859]: E0120 09:20:39.226645 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xqq7l_openshift-multus(81947dc9-599a-4d35-a9c5-2684294a3afb)\"" pod="openshift-multus/multus-xqq7l" podUID="81947dc9-599a-4d35-a9c5-2684294a3afb"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.258766 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.258811 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.258822 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.258837 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.258849 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.361911 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.361966 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.361981 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.362001 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.362016 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.465685 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.465732 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.465749 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.465772 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.465811 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.568840 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.568918 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.568952 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.568986 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.569010 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.573559 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 09:20:39 crc kubenswrapper[4859]: E0120 09:20:39.573714 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.573808 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.573886 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:20:39 crc kubenswrapper[4859]: E0120 09:20:39.574015 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 09:20:39 crc kubenswrapper[4859]: E0120 09:20:39.574296 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.594130 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:54:23.615568265 +0000 UTC
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.673983 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.674370 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.674610 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.674848 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.675056 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.777849 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.777897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.777932 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.777951 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.777964 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.881176 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.881245 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.881257 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.881335 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.881349 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.983698 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.984058 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.984195 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.984362 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:39 crc kubenswrapper[4859]: I0120 09:20:39.984491 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:39Z","lastTransitionTime":"2026-01-20T09:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.087049 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.087425 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.087568 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.087702 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.087929 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.190877 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.190937 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.190954 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.190979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.190997 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.229901 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/1.log"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.293844 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.293895 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.293904 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.293917 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.293925 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.396600 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.396638 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.396646 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.396659 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.396668 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.499897 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.499968 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.499994 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.500022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.500039 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.573577 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:40 crc kubenswrapper[4859]: E0120 09:20:40.574640 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.575745 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:40 crc kubenswrapper[4859]: E0120 09:20:40.576495 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhpfn_openshift-ovn-kubernetes(cfe04730-660d-4e59-8b5e-15e94d72990f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.594589 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:07:11.452616287 +0000 UTC Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.602979 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.603034 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.603052 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.603080 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.603102 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.706899 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.707004 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.707022 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.707045 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.707063 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.810462 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.810548 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.810584 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.810620 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.810642 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.913674 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.913810 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.913824 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.913842 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:40 crc kubenswrapper[4859]: I0120 09:20:40.913854 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:40Z","lastTransitionTime":"2026-01-20T09:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.017274 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.017604 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.017733 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.017902 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.018035 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:41Z","lastTransitionTime":"2026-01-20T09:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.121145 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.121515 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.121660 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.121855 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.121998 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:41Z","lastTransitionTime":"2026-01-20T09:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.225307 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.225635 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.225810 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.225986 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.226127 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:41Z","lastTransitionTime":"2026-01-20T09:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.239649 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.239846 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.239872 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.239902 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.239928 4859 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:20:41Z","lastTransitionTime":"2026-01-20T09:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.307277 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm"] Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.307859 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.310634 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.312967 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.313365 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.313871 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.324654 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.324702 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.324739 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5c3ecd16-83c8-405f-ae49-dfa1593077be-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.324817 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c3ecd16-83c8-405f-ae49-dfa1593077be-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.324851 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3ecd16-83c8-405f-ae49-dfa1593077be-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425527 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425619 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425669 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425689 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c3ecd16-83c8-405f-ae49-dfa1593077be-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425770 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c3ecd16-83c8-405f-ae49-dfa1593077be-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425825 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5c3ecd16-83c8-405f-ae49-dfa1593077be-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.425863 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5c3ecd16-83c8-405f-ae49-dfa1593077be-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.427570 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c3ecd16-83c8-405f-ae49-dfa1593077be-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.435032 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c3ecd16-83c8-405f-ae49-dfa1593077be-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.450867 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3ecd16-83c8-405f-ae49-dfa1593077be-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2x6nm\" (UID: \"5c3ecd16-83c8-405f-ae49-dfa1593077be\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.573682 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:41 crc kubenswrapper[4859]: E0120 09:20:41.573910 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.573699 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:41 crc kubenswrapper[4859]: E0120 09:20:41.574030 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.573702 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:41 crc kubenswrapper[4859]: E0120 09:20:41.574122 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.595597 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:21:08.903336467 +0000 UTC Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.595632 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.605581 4859 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 09:20:41 crc kubenswrapper[4859]: I0120 09:20:41.630174 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" Jan 20 09:20:42 crc kubenswrapper[4859]: I0120 09:20:42.238076 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" event={"ID":"5c3ecd16-83c8-405f-ae49-dfa1593077be","Type":"ContainerStarted","Data":"7d48ad715b957a7d25b02e168b1bdbea46fbeaa913e3f4b1847a32a3f0c4a864"} Jan 20 09:20:42 crc kubenswrapper[4859]: I0120 09:20:42.238147 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" event={"ID":"5c3ecd16-83c8-405f-ae49-dfa1593077be","Type":"ContainerStarted","Data":"8d263429506e7d6a2e22b3d2941f92b74840f745a777b8d4b0ae96c5255159c5"} Jan 20 09:20:42 crc kubenswrapper[4859]: I0120 09:20:42.573129 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:42 crc kubenswrapper[4859]: E0120 09:20:42.573364 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:43 crc kubenswrapper[4859]: I0120 09:20:43.573009 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:43 crc kubenswrapper[4859]: I0120 09:20:43.573044 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:43 crc kubenswrapper[4859]: E0120 09:20:43.573173 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:43 crc kubenswrapper[4859]: I0120 09:20:43.573245 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:43 crc kubenswrapper[4859]: E0120 09:20:43.573414 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:43 crc kubenswrapper[4859]: E0120 09:20:43.573551 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:44 crc kubenswrapper[4859]: I0120 09:20:44.573458 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:44 crc kubenswrapper[4859]: E0120 09:20:44.573655 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:45 crc kubenswrapper[4859]: E0120 09:20:45.551137 4859 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 09:20:45 crc kubenswrapper[4859]: I0120 09:20:45.572941 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:45 crc kubenswrapper[4859]: I0120 09:20:45.572961 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:45 crc kubenswrapper[4859]: E0120 09:20:45.573671 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:45 crc kubenswrapper[4859]: I0120 09:20:45.573766 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:45 crc kubenswrapper[4859]: E0120 09:20:45.573952 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:45 crc kubenswrapper[4859]: E0120 09:20:45.574075 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:45 crc kubenswrapper[4859]: E0120 09:20:45.664447 4859 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 09:20:46 crc kubenswrapper[4859]: I0120 09:20:46.572920 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:46 crc kubenswrapper[4859]: E0120 09:20:46.573417 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:47 crc kubenswrapper[4859]: I0120 09:20:47.573401 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:47 crc kubenswrapper[4859]: I0120 09:20:47.573481 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:47 crc kubenswrapper[4859]: E0120 09:20:47.573543 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:47 crc kubenswrapper[4859]: E0120 09:20:47.573904 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:47 crc kubenswrapper[4859]: I0120 09:20:47.574107 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:47 crc kubenswrapper[4859]: E0120 09:20:47.574235 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:48 crc kubenswrapper[4859]: I0120 09:20:48.573087 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:48 crc kubenswrapper[4859]: E0120 09:20:48.573316 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:49 crc kubenswrapper[4859]: I0120 09:20:49.573621 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:49 crc kubenswrapper[4859]: I0120 09:20:49.573656 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:49 crc kubenswrapper[4859]: E0120 09:20:49.573833 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:49 crc kubenswrapper[4859]: I0120 09:20:49.573883 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:49 crc kubenswrapper[4859]: E0120 09:20:49.574003 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:49 crc kubenswrapper[4859]: E0120 09:20:49.574202 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:50 crc kubenswrapper[4859]: I0120 09:20:50.572896 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:50 crc kubenswrapper[4859]: E0120 09:20:50.573143 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:50 crc kubenswrapper[4859]: E0120 09:20:50.666079 4859 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 09:20:51 crc kubenswrapper[4859]: I0120 09:20:51.573638 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:51 crc kubenswrapper[4859]: I0120 09:20:51.573825 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:51 crc kubenswrapper[4859]: E0120 09:20:51.573854 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:51 crc kubenswrapper[4859]: I0120 09:20:51.573984 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:51 crc kubenswrapper[4859]: E0120 09:20:51.574171 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:51 crc kubenswrapper[4859]: E0120 09:20:51.574349 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:52 crc kubenswrapper[4859]: I0120 09:20:52.573118 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:52 crc kubenswrapper[4859]: E0120 09:20:52.573246 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:52 crc kubenswrapper[4859]: I0120 09:20:52.573709 4859 scope.go:117] "RemoveContainer" containerID="b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499" Jan 20 09:20:52 crc kubenswrapper[4859]: I0120 09:20:52.603225 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2x6nm" podStartSLOduration=108.6032024 podStartE2EDuration="1m48.6032024s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:42.260547357 +0000 UTC m=+117.016563573" watchObservedRunningTime="2026-01-20 09:20:52.6032024 +0000 UTC m=+127.359218606" Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.277501 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/1.log" Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.278085 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerStarted","Data":"7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f"} Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.573697 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:53 crc kubenswrapper[4859]: E0120 09:20:53.573875 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.574092 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:53 crc kubenswrapper[4859]: E0120 09:20:53.574160 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.574214 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:53 crc kubenswrapper[4859]: E0120 09:20:53.574755 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:53 crc kubenswrapper[4859]: I0120 09:20:53.575227 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.284890 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/3.log" Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.288149 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerStarted","Data":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"} Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.289077 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.397746 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podStartSLOduration=110.397726663 podStartE2EDuration="1m50.397726663s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:20:54.337953603 +0000 UTC m=+129.093969809" watchObservedRunningTime="2026-01-20 09:20:54.397726663 +0000 UTC m=+129.153742839" Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.398425 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tw45n"] Jan 20 09:20:54 crc kubenswrapper[4859]: I0120 09:20:54.398526 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:54 crc kubenswrapper[4859]: E0120 09:20:54.398608 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:55 crc kubenswrapper[4859]: I0120 09:20:55.572731 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:55 crc kubenswrapper[4859]: I0120 09:20:55.572769 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:55 crc kubenswrapper[4859]: E0120 09:20:55.574562 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:55 crc kubenswrapper[4859]: I0120 09:20:55.574648 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:55 crc kubenswrapper[4859]: E0120 09:20:55.574882 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:55 crc kubenswrapper[4859]: E0120 09:20:55.574978 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:55 crc kubenswrapper[4859]: E0120 09:20:55.667085 4859 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 09:20:56 crc kubenswrapper[4859]: I0120 09:20:56.573177 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:56 crc kubenswrapper[4859]: E0120 09:20:56.573347 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:57 crc kubenswrapper[4859]: I0120 09:20:57.573586 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:57 crc kubenswrapper[4859]: I0120 09:20:57.573702 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:57 crc kubenswrapper[4859]: E0120 09:20:57.573917 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:57 crc kubenswrapper[4859]: I0120 09:20:57.573979 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:57 crc kubenswrapper[4859]: E0120 09:20:57.574030 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:57 crc kubenswrapper[4859]: E0120 09:20:57.574168 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:20:58 crc kubenswrapper[4859]: I0120 09:20:58.572736 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:20:58 crc kubenswrapper[4859]: E0120 09:20:58.572986 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:20:59 crc kubenswrapper[4859]: I0120 09:20:59.573493 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:20:59 crc kubenswrapper[4859]: I0120 09:20:59.573544 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:20:59 crc kubenswrapper[4859]: I0120 09:20:59.573691 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:20:59 crc kubenswrapper[4859]: E0120 09:20:59.574275 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 09:20:59 crc kubenswrapper[4859]: E0120 09:20:59.574486 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 09:20:59 crc kubenswrapper[4859]: E0120 09:20:59.574543 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 09:21:00 crc kubenswrapper[4859]: I0120 09:21:00.572903 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:21:00 crc kubenswrapper[4859]: E0120 09:21:00.573167 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tw45n" podUID="0c059dec-0bda-4110-9050-7cbba39eb183" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.573090 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.573189 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.573635 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.575820 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.576263 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.576424 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.576675 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.592678 4859 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.640577 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k6nx9"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.641193 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.644480 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.644859 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.646460 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.646629 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.646742 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.646861 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.646959 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.647058 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.650215 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.650558 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45s4x"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.650905 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.653731 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.653984 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.654341 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.654599 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.654713 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.654889 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.655580 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.656040 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.656353 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.658236 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.658358 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 
09:21:01.658626 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.658713 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.658855 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.659460 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.659561 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.659952 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.660041 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.662760 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.663705 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.664467 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.664556 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.665273 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.666591 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.666811 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.667092 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.667203 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.667855 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.671246 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.671734 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.672108 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.673499 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.673761 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.673822 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.673978 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.681321 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.681534 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.681636 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.681731 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.681857 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.688573 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.688721 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.688860 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.688956 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689043 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689128 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689199 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689266 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689392 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.689906 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 09:21:01 crc 
kubenswrapper[4859]: I0120 09:21:01.691001 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.691884 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692033 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692124 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692211 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692305 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692521 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.692856 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.695914 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.696291 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.696587 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.696681 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.696740 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.714159 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.714580 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.719390 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.719534 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.719621 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.719860 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.721406 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.731394 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.731802 4859 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.732205 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.734176 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.735094 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45s4x"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.735335 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.737263 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.737491 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7jqnq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.737742 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.739123 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.740192 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.737868 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.740696 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.742240 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.742669 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.743981 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.744204 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.744238 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5nzb7"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.744614 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.744923 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.745173 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.745662 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.746060 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.747691 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.748579 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749417 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749463 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749523 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749599 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749640 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749742 4859 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749758 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749904 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749910 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749953 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749973 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.750039 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.749926 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.750298 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.750417 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.751521 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.752612 4859 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-rdgmg"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.752909 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.752989 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.753018 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.754900 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.756797 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.757354 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.759594 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.760723 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.765601 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.765797 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nnrsm"] Jan 20 09:21:01 crc kubenswrapper[4859]: 
I0120 09:21:01.765858 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.766110 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.766204 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.766237 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.766325 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.767315 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.775382 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.777113 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.777738 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.780572 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.796274 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.796961 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797101 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skms4\" (UniqueName: \"kubernetes.io/projected/1e3a4bb8-24b4-4e23-93f4-90685b01134b-kube-api-access-skms4\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797237 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797340 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-client\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797469 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-encryption-config\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797608 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797728 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.798639 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.798774 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-encryption-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 
09:21:01.798913 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799029 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799138 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhwm\" (UniqueName: \"kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799232 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-images\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799329 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799429 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zvx\" (UniqueName: \"kubernetes.io/projected/18ae7dff-85b9-4b11-be8a-b7afd856ebca-kube-api-access-h5zvx\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.798120 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799599 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799696 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzn2\" (UniqueName: \"kubernetes.io/projected/7fbd020a-2b39-464a-a4af-965d3d5a4de1-kube-api-access-4rzn2\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799834 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlmm\" 
(UniqueName: \"kubernetes.io/projected/feaf2105-331f-4c98-8f11-9680aa0f9330-kube-api-access-4dlmm\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799947 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-serving-cert\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800042 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800124 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800213 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800304 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-node-pullsecrets\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800402 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800498 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4bb8-24b4-4e23-93f4-90685b01134b-machine-approver-tls\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800582 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800669 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hzw62\" (UniqueName: \"kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800755 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.800978 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801265 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-config\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801362 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b827892-45de-41de-ae6a-fc9437a86871-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: 
\"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801477 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801565 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801659 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ae7dff-85b9-4b11-be8a-b7afd856ebca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.798554 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.797368 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.801774 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.799688 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802158 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2n4r\" (UniqueName: \"kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802197 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802221 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802250 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxf8\" (UniqueName: \"kubernetes.io/projected/dd9dab0e-3012-4373-b89c-83d39534771f-kube-api-access-qnxf8\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802423 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-auth-proxy-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802434 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2pj89"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802461 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfxx7\" (UniqueName: \"kubernetes.io/projected/55e6a858-5ae7-4d3d-a454-227bf8b52195-kube-api-access-bfxx7\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802487 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-serving-cert\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802513 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802536 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802558 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802578 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-serving-cert\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802597 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-config\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802624 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd9dab0e-3012-4373-b89c-83d39534771f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802677 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-image-import-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802703 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802757 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802796 4859 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802824 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-client\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802858 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae7dff-85b9-4b11-be8a-b7afd856ebca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802907 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802936 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e6a858-5ae7-4d3d-a454-227bf8b52195-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.802957 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit-dir\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803021 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803052 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wffjq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803074 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b827892-45de-41de-ae6a-fc9437a86871-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803097 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwmq\" (UniqueName: \"kubernetes.io/projected/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-kube-api-access-rwwmq\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803150 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803174 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-policies\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803194 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-dir\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803247 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803279 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxv5\" (UniqueName: \"kubernetes.io/projected/6b827892-45de-41de-ae6a-fc9437a86871-kube-api-access-jhxv5\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803383 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803599 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.803938 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.804425 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.806454 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.807575 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.808718 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.809617 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k6nx9"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.810530 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.811224 4859 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.811647 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.812118 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.812557 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.813135 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.813448 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.816509 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.816868 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.817002 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.817288 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.817295 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.819828 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.820512 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.820728 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.821062 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.824087 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.824480 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.825114 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.825630 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.826247 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.826616 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.826917 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.827481 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx6p7"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.828146 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wkjvd"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.828969 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.829492 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.830043 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.831339 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.834518 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.834970 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7jqnq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.837559 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.839954 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5nzb7"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.843127 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.843355 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.843491 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nnrsm"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.845749 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2pj89"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.849172 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.849218 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.851084 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.853691 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.853720 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.853946 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.855968 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rdgmg"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.856196 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.857617 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.859627 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tljms"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.875182 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.875234 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 
09:21:01.875248 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.875381 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.875523 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zbj6d"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.876127 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.880680 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.880715 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.880823 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.881673 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.883074 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.884522 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.885691 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tljms"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.887087 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.888491 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.889931 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.891291 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.893174 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zbj6d"] Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.894394 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 09:21:01 crc 
kubenswrapper[4859]: I0120 09:21:01.894486 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx6p7"]
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.895567 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c4jrg"]
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.896314 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c4jrg"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.896630 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c4jrg"]
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.903942 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae7dff-85b9-4b11-be8a-b7afd856ebca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.903978 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-client\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904010 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904028 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e6a858-5ae7-4d3d-a454-227bf8b52195-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904045 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit-dir\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904064 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904084 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b827892-45de-41de-ae6a-fc9437a86871-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904103 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwmq\" (UniqueName: \"kubernetes.io/projected/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-kube-api-access-rwwmq\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904122 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904139 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-policies\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904155 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-dir\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904171 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904188 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxv5\" (UniqueName: \"kubernetes.io/projected/6b827892-45de-41de-ae6a-fc9437a86871-kube-api-access-jhxv5\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904206 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skms4\" (UniqueName: \"kubernetes.io/projected/1e3a4bb8-24b4-4e23-93f4-90685b01134b-kube-api-access-skms4\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904225 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904243 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-client\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904266 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-encryption-config\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904281 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904300 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904316 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-encryption-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904332 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904349 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904367 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904385 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904406 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trhwm\" (UniqueName: \"kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904425 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-images\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904450 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zvx\" (UniqueName: \"kubernetes.io/projected/18ae7dff-85b9-4b11-be8a-b7afd856ebca-kube-api-access-h5zvx\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904478 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904505 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904528 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzn2\" (UniqueName: \"kubernetes.io/projected/7fbd020a-2b39-464a-a4af-965d3d5a4de1-kube-api-access-4rzn2\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904552 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlmm\" (UniqueName: \"kubernetes.io/projected/feaf2105-331f-4c98-8f11-9680aa0f9330-kube-api-access-4dlmm\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904576 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-serving-cert\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904597 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904619 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904642 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904661 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-node-pullsecrets\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904685 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4bb8-24b4-4e23-93f4-90685b01134b-machine-approver-tls\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904706 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904728 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904752 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzw62\" (UniqueName: \"kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904774 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904832 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904858 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-config\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904882 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b827892-45de-41de-ae6a-fc9437a86871-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904925 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904946 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.904984 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ae7dff-85b9-4b11-be8a-b7afd856ebca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905005 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905028 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2n4r\" (UniqueName: \"kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905048 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905069 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905087 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-auth-proxy-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905105 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxf8\" (UniqueName: \"kubernetes.io/projected/dd9dab0e-3012-4373-b89c-83d39534771f-kube-api-access-qnxf8\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905125 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f9p\" (UniqueName: \"kubernetes.io/projected/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-kube-api-access-j8f9p\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905143 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfxx7\" (UniqueName: \"kubernetes.io/projected/55e6a858-5ae7-4d3d-a454-227bf8b52195-kube-api-access-bfxx7\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905160 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905175 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905190 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-serving-cert\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905208 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905226 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd9dab0e-3012-4373-b89c-83d39534771f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905242 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905257 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-serving-cert\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905277 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-config\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905296 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905317 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905352 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-image-import-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905424 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.905480 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-audit-dir\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.906451 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-dir\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.906476 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.906874 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.906940 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.907190 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.907324 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.907641 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-config\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.907814 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.907956 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.908289 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-config\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.908332 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ae7dff-85b9-4b11-be8a-b7afd856ebca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.908979 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.909493 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.909698 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e6a858-5ae7-4d3d-a454-227bf8b52195-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.909769 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fbd020a-2b39-464a-a4af-965d3d5a4de1-node-pullsecrets\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.909997 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.910093 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-client\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.910368 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7fbd020a-2b39-464a-a4af-965d3d5a4de1-image-import-ca\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.910524 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55e6a858-5ae7-4d3d-a454-227bf8b52195-images\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.910733 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.910808 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b827892-45de-41de-ae6a-fc9437a86871-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911183 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911244 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-encryption-config\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911448 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911492 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911583 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf2105-331f-4c98-8f11-9680aa0f9330-serving-cert\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911811 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4bb8-24b4-4e23-93f4-90685b01134b-auth-proxy-config\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911903 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.911980 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-encryption-config\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.912292 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feaf2105-331f-4c98-8f11-9680aa0f9330-audit-policies\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.912348 4859 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.912863 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.912891 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.912937 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.913459 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b827892-45de-41de-ae6a-fc9437a86871-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.914047 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.914275 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.914520 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-serving-cert\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.914971 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915001 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915039 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915094 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4bb8-24b4-4e23-93f4-90685b01134b-machine-approver-tls\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915118 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915571 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.915705 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert\") pod \"controller-manager-879f6c89f-62vdh\" 
(UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.916386 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.917119 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-etcd-client\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.917572 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd9dab0e-3012-4373-b89c-83d39534771f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.921991 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae7dff-85b9-4b11-be8a-b7afd856ebca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.924100 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.925627 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbd020a-2b39-464a-a4af-965d3d5a4de1-serving-cert\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.935363 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.955094 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.975803 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 09:21:01 crc kubenswrapper[4859]: I0120 09:21:01.995010 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.005940 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.006103 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-j8f9p\" (UniqueName: \"kubernetes.io/projected/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-kube-api-access-j8f9p\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.006159 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.014942 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.035335 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.055544 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.075939 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.096439 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.115976 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.135516 4859 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.155040 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.185716 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.195048 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.222380 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.236425 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.255202 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.276123 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.296400 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.315707 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.336077 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.356469 4859 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.376031 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.396005 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.416109 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.435887 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.456103 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.475767 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.496384 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.535439 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.556346 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.573384 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.575327 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.595235 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.614875 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.634509 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.655536 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.675338 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.696207 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.715902 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.735562 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.755578 4859 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.776274 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.796605 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.813882 4859 request.go:700] Waited for 1.001426952s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.815075 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.835485 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.856010 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.875733 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.896311 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.916567 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.935812 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.954706 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.989894 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 09:21:02 crc kubenswrapper[4859]: I0120 09:21:02.996135 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 09:21:03 crc kubenswrapper[4859]: E0120 09:21:03.007053 4859 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 20 09:21:03 crc kubenswrapper[4859]: E0120 09:21:03.007071 4859 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 09:21:03 crc kubenswrapper[4859]: E0120 09:21:03.007205 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config podName:4161b7ef-0bb8-47a9-a8f1-61804d03b08d nodeName:}" failed. No retries permitted until 2026-01-20 09:21:03.507163744 +0000 UTC m=+138.263179950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config") pod "service-ca-operator-777779d784-2mnrr" (UID: "4161b7ef-0bb8-47a9-a8f1-61804d03b08d") : failed to sync configmap cache: timed out waiting for the condition Jan 20 09:21:03 crc kubenswrapper[4859]: E0120 09:21:03.007667 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert podName:4161b7ef-0bb8-47a9-a8f1-61804d03b08d nodeName:}" failed. No retries permitted until 2026-01-20 09:21:03.507633387 +0000 UTC m=+138.263649593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert") pod "service-ca-operator-777779d784-2mnrr" (UID: "4161b7ef-0bb8-47a9-a8f1-61804d03b08d") : failed to sync secret cache: timed out waiting for the condition Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.015599 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.036221 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.055423 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.076356 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.096903 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 
20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.116188 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.135296 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.156261 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.176153 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.196368 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.216469 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.235199 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.256530 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.276484 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.296215 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 
09:21:03.315703 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.336316 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.355661 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.375740 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.396364 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.416059 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.435223 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.455472 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.475829 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.497246 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.528068 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.528158 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.528821 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-config\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.532929 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-serving-cert\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.536021 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.555856 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.580753 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.598437 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.615639 4859 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.636150 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.655817 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.675707 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.696281 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.715863 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.757733 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skms4\" (UniqueName: \"kubernetes.io/projected/1e3a4bb8-24b4-4e23-93f4-90685b01134b-kube-api-access-skms4\") pod \"machine-approver-56656f9798-lhhg9\" (UID: \"1e3a4bb8-24b4-4e23-93f4-90685b01134b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.774876 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxv5\" (UniqueName: \"kubernetes.io/projected/6b827892-45de-41de-ae6a-fc9437a86871-kube-api-access-jhxv5\") pod \"openshift-apiserver-operator-796bbdcf4f-tlm96\" (UID: \"6b827892-45de-41de-ae6a-fc9437a86871\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.797170 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwmq\" (UniqueName: \"kubernetes.io/projected/ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d-kube-api-access-rwwmq\") pod \"authentication-operator-69f744f599-wfx2f\" (UID: \"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.814512 4859 request.go:700] Waited for 1.907915317s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.818061 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2n4r\" (UniqueName: \"kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r\") pod \"oauth-openshift-558db77b4-7fk4r\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.838137 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfxx7\" (UniqueName: \"kubernetes.io/projected/55e6a858-5ae7-4d3d-a454-227bf8b52195-kube-api-access-bfxx7\") pod \"machine-api-operator-5694c8668f-45s4x\" (UID: \"55e6a858-5ae7-4d3d-a454-227bf8b52195\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.849755 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.862968 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dlmm\" (UniqueName: \"kubernetes.io/projected/feaf2105-331f-4c98-8f11-9680aa0f9330-kube-api-access-4dlmm\") pod \"apiserver-7bbb656c7d-k2vrh\" (UID: \"feaf2105-331f-4c98-8f11-9680aa0f9330\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.870416 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxf8\" (UniqueName: \"kubernetes.io/projected/dd9dab0e-3012-4373-b89c-83d39534771f-kube-api-access-qnxf8\") pod \"cluster-samples-operator-665b6dd947-j5sjq\" (UID: \"dd9dab0e-3012-4373-b89c-83d39534771f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.887913 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zvx\" (UniqueName: \"kubernetes.io/projected/18ae7dff-85b9-4b11-be8a-b7afd856ebca-kube-api-access-h5zvx\") pod \"openshift-controller-manager-operator-756b6f6bc6-zj874\" (UID: \"18ae7dff-85b9-4b11-be8a-b7afd856ebca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.916656 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhwm\" (UniqueName: \"kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm\") pod \"route-controller-manager-6576b87f9c-9dw2z\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.925477 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.930111 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzn2\" (UniqueName: \"kubernetes.io/projected/7fbd020a-2b39-464a-a4af-965d3d5a4de1-kube-api-access-4rzn2\") pod \"apiserver-76f77b778f-k6nx9\" (UID: \"7fbd020a-2b39-464a-a4af-965d3d5a4de1\") " pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.933254 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.934662 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.941664 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.953062 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.959420 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.959611 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzw62\" (UniqueName: \"kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62\") pod \"controller-manager-879f6c89f-62vdh\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.967618 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.976777 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f9p\" (UniqueName: \"kubernetes.io/projected/4161b7ef-0bb8-47a9-a8f1-61804d03b08d-kube-api-access-j8f9p\") pod \"service-ca-operator-777779d784-2mnrr\" (UID: \"4161b7ef-0bb8-47a9-a8f1-61804d03b08d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"
Jan 20 09:21:03 crc kubenswrapper[4859]: I0120 09:21:03.981438 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.015330 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.070369 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.096763 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.133547 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168324 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p8zk\" (UniqueName: \"kubernetes.io/projected/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-kube-api-access-7p8zk\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168360 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/417685e2-532e-4391-828b-1696f0be8f9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168378 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168401 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldt62\" (UniqueName: \"kubernetes.io/projected/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-kube-api-access-ldt62\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2pj89"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168427 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168442 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-certs\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168461 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqlbb\" (UniqueName: \"kubernetes.io/projected/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-kube-api-access-rqlbb\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168475 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jkp\" (UniqueName: \"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-kube-api-access-p9jkp\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168497 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-proxy-tls\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168513 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2skp\" (UniqueName: \"kubernetes.io/projected/05988ff3-118b-422f-aa51-ec26acb44fd5-kube-api-access-f2skp\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168528 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168543 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr8x\" (UniqueName: \"kubernetes.io/projected/e382f4ff-2fd6-4e40-8372-3d4871c075ec-kube-api-access-gvr8x\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168568 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-config\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168581 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-srv-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168654 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjw88\" (UniqueName: \"kubernetes.io/projected/ed32b086-da01-429f-a440-1823d9d18e9a-kube-api-access-xjw88\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168696 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168728 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-serving-cert\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168750 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168765 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168795 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvqg\" (UniqueName: \"kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168812 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vfx\" (UniqueName: \"kubernetes.io/projected/22b0a1dc-3c28-4107-8c06-8b2518a35af5-kube-api-access-l2vfx\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168837 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168854 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vdn\" (UniqueName: \"kubernetes.io/projected/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-kube-api-access-d2vdn\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168868 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34034d1a-c537-454b-a196-592ec6f2e43f-service-ca-bundle\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168883 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-stats-auth\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168897 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168914 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168939 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168954 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.168987 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169007 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-apiservice-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169043 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqprc\" (UniqueName: \"kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169057 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169071 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwnv5\" (UniqueName: \"kubernetes.io/projected/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-kube-api-access-jwnv5\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169094 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-images\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169109 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169122 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169146 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bvs\" (UniqueName: \"kubernetes.io/projected/8abb1de6-2a1d-4144-9836-f0ac771b67ce-kube-api-access-z8bvs\") pod \"migrator-59844c95c7-bppzw\" (UID: \"8abb1de6-2a1d-4144-9836-f0ac771b67ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169162 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169194 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hdj\" (UniqueName: \"kubernetes.io/projected/9af6cf92-9706-4582-ba03-32fb453da4bf-kube-api-access-85hdj\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169218 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-client\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169250 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169267 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-default-certificate\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169282 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-service-ca\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169296 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ed32b086-da01-429f-a440-1823d9d18e9a-tmpfs\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169349 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-key\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169363 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-metrics-tls\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2pj89"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169379 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-config\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169394 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169411 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxwk\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-kube-api-access-xvxwk\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169427 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-service-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169460 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-webhook-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169484 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-758pt\" (UniqueName: \"kubernetes.io/projected/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-kube-api-access-758pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169498 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169514 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7lf\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169528 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-trusted-ca-bundle\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169554 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169579 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbksx\" (UniqueName: \"kubernetes.io/projected/34034d1a-c537-454b-a196-592ec6f2e43f-kube-api-access-xbksx\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169594 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hff6\" (UniqueName: \"kubernetes.io/projected/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-kube-api-access-7hff6\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169609 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169624 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwcv\" (UniqueName: \"kubernetes.io/projected/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-kube-api-access-lmwcv\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169640 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-metrics-tls\") pod
\"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169657 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169671 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-cabundle\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169688 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169725 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5cn\" (UniqueName: \"kubernetes.io/projected/a80a5e3b-eb7a-49f6-a9c5-1860decdfc75-kube-api-access-qx5cn\") pod \"downloads-7954f5f757-nnrsm\" (UID: \"a80a5e3b-eb7a-49f6-a9c5-1860decdfc75\") " pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 
09:21:04.169751 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169766 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d189960-186e-471b-b3fa-456854ca6763-serving-cert\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169796 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-trusted-ca\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169810 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169825 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169841 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169855 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417685e2-532e-4391-828b-1696f0be8f9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169880 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169893 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc 
kubenswrapper[4859]: I0120 09:21:04.169909 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169944 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-metrics-certs\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169960 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncht2\" (UniqueName: \"kubernetes.io/projected/4d189960-186e-471b-b3fa-456854ca6763-kube-api-access-ncht2\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.169985 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170008 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170022 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170054 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-serving-cert\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170068 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-srv-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170110 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-proxy-tls\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 
09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170135 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-oauth-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170150 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-config\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170168 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4n5\" (UniqueName: \"kubernetes.io/projected/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-kube-api-access-bb4n5\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170182 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d189960-186e-471b-b3fa-456854ca6763-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170234 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-oauth-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170250 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-node-bootstrap-token\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170264 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417685e2-532e-4391-828b-1696f0be8f9d-config\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.170280 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zxz\" (UniqueName: \"kubernetes.io/projected/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-kube-api-access-v7zxz\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.172651 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.672624655 +0000 UTC m=+139.428640841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.187980 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.206739 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.267692 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.270892 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.271055 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.771028796 +0000 UTC m=+139.527044972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271115 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hdj\" (UniqueName: \"kubernetes.io/projected/9af6cf92-9706-4582-ba03-32fb453da4bf-kube-api-access-85hdj\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271160 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-client\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271191 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271216 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-default-certificate\") pod 
\"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271238 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-service-ca\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271262 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ed32b086-da01-429f-a440-1823d9d18e9a-tmpfs\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271287 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-config-volume\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.271769 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.771751766 +0000 UTC m=+139.527767942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271707 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ed32b086-da01-429f-a440-1823d9d18e9a-tmpfs\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.271971 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-service-ca\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272114 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-key\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272145 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-metrics-tls\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272167 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-config\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272189 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272211 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxwk\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-kube-api-access-xvxwk\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272232 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-service-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272258 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-webhook-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272292 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-758pt\" (UniqueName: \"kubernetes.io/projected/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-kube-api-access-758pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272312 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272335 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq7lf\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272713 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-trusted-ca-bundle\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc 
kubenswrapper[4859]: I0120 09:21:04.272761 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272801 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbksx\" (UniqueName: \"kubernetes.io/projected/34034d1a-c537-454b-a196-592ec6f2e43f-kube-api-access-xbksx\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272854 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hff6\" (UniqueName: \"kubernetes.io/projected/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-kube-api-access-7hff6\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272877 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272898 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwcv\" (UniqueName: \"kubernetes.io/projected/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-kube-api-access-lmwcv\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272918 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-metrics-tls\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272941 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272963 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-cabundle\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.272988 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273035 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5cn\" (UniqueName: \"kubernetes.io/projected/a80a5e3b-eb7a-49f6-a9c5-1860decdfc75-kube-api-access-qx5cn\") pod \"downloads-7954f5f757-nnrsm\" (UID: \"a80a5e3b-eb7a-49f6-a9c5-1860decdfc75\") " pod="openshift-console/downloads-7954f5f757-nnrsm"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273058 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-metrics-tls\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273082 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273102 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d189960-186e-471b-b3fa-456854ca6763-serving-cert\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273120 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-trusted-ca\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273141 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273164 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273199 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273223 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417685e2-532e-4391-828b-1696f0be8f9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273249 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273268 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273289 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273312 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-csi-data-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273347 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-metrics-certs\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273352 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-service-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273372 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncht2\" (UniqueName: \"kubernetes.io/projected/4d189960-186e-471b-b3fa-456854ca6763-kube-api-access-ncht2\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273398 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273432 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273460 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273488 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-serving-cert\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273509 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-srv-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273533 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-registration-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273560 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-proxy-tls\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273595 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-oauth-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273618 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-config\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273641 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4n5\" (UniqueName: \"kubernetes.io/projected/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-kube-api-access-bb4n5\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273663 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d189960-186e-471b-b3fa-456854ca6763-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273690 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-oauth-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273712 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-node-bootstrap-token\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273734 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq2wv\" (UniqueName: \"kubernetes.io/projected/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-kube-api-access-vq2wv\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273757 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417685e2-532e-4391-828b-1696f0be8f9d-config\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273797 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zxz\" (UniqueName: \"kubernetes.io/projected/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-kube-api-access-v7zxz\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273839 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p8zk\" (UniqueName: \"kubernetes.io/projected/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-kube-api-access-7p8zk\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273866 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-plugins-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273890 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/417685e2-532e-4391-828b-1696f0be8f9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273913 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7wd\" (UniqueName: \"kubernetes.io/projected/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-kube-api-access-qp7wd\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273935 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273957 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldt62\" (UniqueName: \"kubernetes.io/projected/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-kube-api-access-ldt62\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2pj89"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.273982 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274002 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-certs\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274024 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqlbb\" (UniqueName: \"kubernetes.io/projected/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-kube-api-access-rqlbb\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274048 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9jkp\" (UniqueName: \"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-kube-api-access-p9jkp\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274069 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-proxy-tls\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274089 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2skp\" (UniqueName: \"kubernetes.io/projected/05988ff3-118b-422f-aa51-ec26acb44fd5-kube-api-access-f2skp\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274124 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274144 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr8x\" (UniqueName: \"kubernetes.io/projected/e382f4ff-2fd6-4e40-8372-3d4871c075ec-kube-api-access-gvr8x\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274168 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-config\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274188 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-srv-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274214 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjvjz\" (UniqueName: \"kubernetes.io/projected/89c4974e-874d-414c-9a9b-987b6e9c9a5c-kube-api-access-jjvjz\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274239 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjw88\" (UniqueName: \"kubernetes.io/projected/ed32b086-da01-429f-a440-1823d9d18e9a-kube-api-access-xjw88\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274264 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274304 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-serving-cert\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274327 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274351 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274372 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvqg\" (UniqueName: \"kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274397 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2vfx\" (UniqueName: \"kubernetes.io/projected/22b0a1dc-3c28-4107-8c06-8b2518a35af5-kube-api-access-l2vfx\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274418 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274438 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-cert\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274461 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vdn\" (UniqueName: \"kubernetes.io/projected/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-kube-api-access-d2vdn\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274483 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34034d1a-c537-454b-a196-592ec6f2e43f-service-ca-bundle\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274503 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-stats-auth\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274525 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274545 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274566 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-mountpoint-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274590 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274610 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274632 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274655 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-apiservice-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274692 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqprc\" (UniqueName: \"kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274712 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274735 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwnv5\" (UniqueName: \"kubernetes.io/projected/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-kube-api-access-jwnv5\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274769 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-images\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.274877 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.275280 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-default-certificate\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.276032 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d189960-186e-471b-b3fa-456854ca6763-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.276415 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.276572 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-webhook-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.276861 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277361 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-cabundle\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277476 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277513 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277541 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-socket-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277574 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bvs\" (UniqueName: \"kubernetes.io/projected/8abb1de6-2a1d-4144-9836-f0ac771b67ce-kube-api-access-z8bvs\") pod \"migrator-59844c95c7-bppzw\" (UID: \"8abb1de6-2a1d-4144-9836-f0ac771b67ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277622 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.277946 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34034d1a-c537-454b-a196-592ec6f2e43f-service-ca-bundle\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279010 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279169 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-config\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279310 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279648 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"
Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279656 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName:
\"kubernetes.io/secret/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-signing-key\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.279726 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-oauth-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.280716 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-config\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.281001 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-trusted-ca-bundle\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.281004 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-config\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.285014 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.289756 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.292214 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.292773 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.294487 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-ca\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.294582 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.295310 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.295485 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417685e2-532e-4391-828b-1696f0be8f9d-config\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.296654 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-trusted-ca\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.298612 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc 
kubenswrapper[4859]: I0120 09:21:04.299587 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.299646 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.300010 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-proxy-tls\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.300094 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.300345 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" 
(UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.301240 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417685e2-532e-4391-828b-1696f0be8f9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.301421 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-certs\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.301510 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.301703 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.302262 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ed32b086-da01-429f-a440-1823d9d18e9a-apiservice-cert\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.302585 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-srv-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.303587 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.303604 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-proxy-tls\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.304563 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-oauth-config\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.304614 4859 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-serving-cert\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.304730 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-srv-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.304893 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9af6cf92-9706-4582-ba03-32fb453da4bf-node-bootstrap-token\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.304997 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-images\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.309222 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-serving-cert\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.309634 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.311083 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-metrics-certs\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.311943 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.312477 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-metrics-tls\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.314866 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34034d1a-c537-454b-a196-592ec6f2e43f-stats-auth\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc 
kubenswrapper[4859]: I0120 09:21:04.315941 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-console-serving-cert\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.316269 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05988ff3-118b-422f-aa51-ec26acb44fd5-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.316388 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e382f4ff-2fd6-4e40-8372-3d4871c075ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.318691 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-metrics-tls\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.320024 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d189960-186e-471b-b3fa-456854ca6763-serving-cert\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.321179 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.322926 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22b0a1dc-3c28-4107-8c06-8b2518a35af5-etcd-client\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.326465 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hdj\" (UniqueName: \"kubernetes.io/projected/9af6cf92-9706-4582-ba03-32fb453da4bf-kube-api-access-85hdj\") pod \"machine-config-server-wkjvd\" (UID: \"9af6cf92-9706-4582-ba03-32fb453da4bf\") " pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.331023 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" event={"ID":"1e3a4bb8-24b4-4e23-93f4-90685b01134b","Type":"ContainerStarted","Data":"abec4e50767a3600f0f07ff602c78b52e9df14f532f777bc75dfec894892bbdb"} Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.332307 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.334877 4859 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5cn\" (UniqueName: \"kubernetes.io/projected/a80a5e3b-eb7a-49f6-a9c5-1860decdfc75-kube-api-access-qx5cn\") pod \"downloads-7954f5f757-nnrsm\" (UID: \"a80a5e3b-eb7a-49f6-a9c5-1860decdfc75\") " pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.357942 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hff6\" (UniqueName: \"kubernetes.io/projected/bd9a36ff-f9d5-4694-bf93-8762ec135ca8-kube-api-access-7hff6\") pod \"console-f9d7485db-7jqnq\" (UID: \"bd9a36ff-f9d5-4694-bf93-8762ec135ca8\") " pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388304 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388574 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-registration-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388622 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq2wv\" (UniqueName: \"kubernetes.io/projected/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-kube-api-access-vq2wv\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388656 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-plugins-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388692 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7wd\" (UniqueName: \"kubernetes.io/projected/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-kube-api-access-qp7wd\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.388760 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjvjz\" (UniqueName: \"kubernetes.io/projected/89c4974e-874d-414c-9a9b-987b6e9c9a5c-kube-api-access-jjvjz\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.388990 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.888969766 +0000 UTC m=+139.644985942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389238 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-registration-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389285 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-cert\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389315 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-mountpoint-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389378 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-socket-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc 
kubenswrapper[4859]: I0120 09:21:04.389414 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389438 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-config-volume\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389513 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-metrics-tls\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389539 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-csi-data-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.389542 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-plugins-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.390008 4859 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.889991153 +0000 UTC m=+139.646007329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.390316 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-config-volume\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.390583 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-csi-data-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.390673 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-mountpoint-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.390975 4859 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89c4974e-874d-414c-9a9b-987b6e9c9a5c-socket-dir\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.392725 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.401862 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-cert\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.404231 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-metrics-tls\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.405393 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-758pt\" (UniqueName: \"kubernetes.io/projected/71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0-kube-api-access-758pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhngt\" (UID: \"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.412243 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxwk\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-kube-api-access-xvxwk\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: 
\"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.418189 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.419559 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq7lf\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.431457 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwcv\" (UniqueName: \"kubernetes.io/projected/d80ee0cc-e67b-4fef-8977-cac3732b5cbf-kube-api-access-lmwcv\") pod \"console-operator-58897d9998-5nzb7\" (UID: \"d80ee0cc-e67b-4fef-8977-cac3732b5cbf\") " pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.451754 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbksx\" (UniqueName: \"kubernetes.io/projected/34034d1a-c537-454b-a196-592ec6f2e43f-kube-api-access-xbksx\") pod \"router-default-5444994796-wffjq\" (UID: \"34034d1a-c537-454b-a196-592ec6f2e43f\") " pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.456738 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.458501 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.462232 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.490370 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvqg\" (UniqueName: \"kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg\") pod \"marketplace-operator-79b997595-rpkn9\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.490486 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.490600 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.990581975 +0000 UTC m=+139.746598151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.491162 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.491458 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:04.991450429 +0000 UTC m=+139.747466605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.496355 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.509881 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2vfx\" (UniqueName: \"kubernetes.io/projected/22b0a1dc-3c28-4107-8c06-8b2518a35af5-kube-api-access-l2vfx\") pod \"etcd-operator-b45778765-rdgmg\" (UID: \"22b0a1dc-3c28-4107-8c06-8b2518a35af5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.529721 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vdn\" (UniqueName: \"kubernetes.io/projected/b6433dee-b2ac-4bde-aea5-66d641ecdfa2-kube-api-access-d2vdn\") pod \"service-ca-9c57cc56f-jx6p7\" (UID: \"b6433dee-b2ac-4bde-aea5-66d641ecdfa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.555943 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldt62\" (UniqueName: \"kubernetes.io/projected/82733214-d1d7-49bd-ae3e-49ffd0c18c6e-kube-api-access-ldt62\") pod \"dns-operator-744455d44c-2pj89\" (UID: \"82733214-d1d7-49bd-ae3e-49ffd0c18c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.562310 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45s4x"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.568342 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wfx2f"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.568813 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e06e843-b10c-4d2e-beb8-45db4f8b0b22-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-f2mcf\" (UID: \"5e06e843-b10c-4d2e-beb8-45db4f8b0b22\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.576017 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.585065 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wkjvd" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.587796 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6h4hh\" (UID: \"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.592122 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.592232 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.092215786 +0000 UTC m=+139.848231962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.592469 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.592734 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.092726119 +0000 UTC m=+139.848742295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.595736 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7jqnq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.607948 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zxz\" (UniqueName: \"kubernetes.io/projected/0e70b9af-669a-4ef2-b830-ed7b6d7e12fb-kube-api-access-v7zxz\") pod \"machine-config-controller-84d6567774-dcnjz\" (UID: \"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.632147 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p8zk\" (UniqueName: \"kubernetes.io/projected/f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2-kube-api-access-7p8zk\") pod \"multus-admission-controller-857f4d67dd-hqm8v\" (UID: \"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.646118 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.649474 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/417685e2-532e-4391-828b-1696f0be8f9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v9zft\" (UID: \"417685e2-532e-4391-828b-1696f0be8f9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.659926 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.669556 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.672285 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.673346 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k6nx9"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.674619 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwnv5\" (UniqueName: \"kubernetes.io/projected/a8e8337b-a5c8-468f-a859-9fa0ba3eb981-kube-api-access-jwnv5\") pod \"package-server-manager-789f6589d5-gvtgk\" (UID: \"a8e8337b-a5c8-468f-a859-9fa0ba3eb981\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.689875 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.690191 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.693433 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.693617 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.193598029 +0000 UTC m=+139.949614215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.693769 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.694304 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.194278168 +0000 UTC m=+139.950294384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.697994 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.702243 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.708456 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.708765 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqlbb\" (UniqueName: \"kubernetes.io/projected/26f9bf13-ca6f-4939-aba2-10cf819a8f1d-kube-api-access-rqlbb\") pod \"machine-config-operator-74547568cd-998nl\" (UID: \"26f9bf13-ca6f-4939-aba2-10cf819a8f1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.723483 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.733333 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"] Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.734128 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4n5\" (UniqueName: \"kubernetes.io/projected/1c7c602a-e7f3-42de-a0ab-38e317f8b4ed-kube-api-access-bb4n5\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn6p8\" (UID: \"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.752700 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.769406 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bvs\" (UniqueName: \"kubernetes.io/projected/8abb1de6-2a1d-4144-9836-f0ac771b67ce-kube-api-access-z8bvs\") pod \"migrator-59844c95c7-bppzw\" (UID: \"8abb1de6-2a1d-4144-9836-f0ac771b67ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.785652 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.789273 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqprc\" (UniqueName: \"kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc\") pod \"collect-profiles-29481675-nf2rk\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.795217 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.795480 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.295458565 +0000 UTC m=+140.051474751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.795584 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.796043 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.29602448 +0000 UTC m=+140.052040666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.807942 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.817695 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncht2\" (UniqueName: \"kubernetes.io/projected/4d189960-186e-471b-b3fa-456854ca6763-kube-api-access-ncht2\") pod \"openshift-config-operator-7777fb866f-w8pvq\" (UID: \"4d189960-186e-471b-b3fa-456854ca6763\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.820356 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.829108 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ntdzq\" (UID: \"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.833968 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.849326 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2skp\" (UniqueName: \"kubernetes.io/projected/05988ff3-118b-422f-aa51-ec26acb44fd5-kube-api-access-f2skp\") pod \"catalog-operator-68c6474976-fvvlc\" (UID: \"05988ff3-118b-422f-aa51-ec26acb44fd5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.851239 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.869974 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjw88\" (UniqueName: \"kubernetes.io/projected/ed32b086-da01-429f-a440-1823d9d18e9a-kube-api-access-xjw88\") pod \"packageserver-d55dfcdfc-fvtsl\" (UID: \"ed32b086-da01-429f-a440-1823d9d18e9a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.888265 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.888672 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.893220 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.896774 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.896892 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.396872789 +0000 UTC m=+140.152888965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.897209 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.897522 4859 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.397514237 +0000 UTC m=+140.153530413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.927550 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq2wv\" (UniqueName: \"kubernetes.io/projected/39408983-b8b3-4dc3-ab3c-f57031ce7a5d-kube-api-access-vq2wv\") pod \"dns-default-c4jrg\" (UID: \"39408983-b8b3-4dc3-ab3c-f57031ce7a5d\") " pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.935321 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.947584 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjvjz\" (UniqueName: \"kubernetes.io/projected/89c4974e-874d-414c-9a9b-987b6e9c9a5c-kube-api-access-jjvjz\") pod \"csi-hostpathplugin-zbj6d\" (UID: \"89c4974e-874d-414c-9a9b-987b6e9c9a5c\") " pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.951982 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.969243 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7wd\" (UniqueName: \"kubernetes.io/projected/c3eb4647-fc2e-430e-81cd-3f8c1c8bee50-kube-api-access-qp7wd\") pod \"ingress-canary-tljms\" (UID: \"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50\") " pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.998010 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.998134 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.498105429 +0000 UTC m=+140.254121645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:04 crc kubenswrapper[4859]: I0120 09:21:04.998175 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:04 crc kubenswrapper[4859]: E0120 09:21:04.998649 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.498633893 +0000 UTC m=+140.254650099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.022638 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.068677 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.099654 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.100001 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.599957884 +0000 UTC m=+140.355974120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.100380 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.100828 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.600805158 +0000 UTC m=+140.356821374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.198855 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tljms" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.201513 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.202377 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.702351285 +0000 UTC m=+140.458367502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.220181 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr8x\" (UniqueName: \"kubernetes.io/projected/e382f4ff-2fd6-4e40-8372-3d4871c075ec-kube-api-access-gvr8x\") pod \"olm-operator-6b444d44fb-cckbv\" (UID: \"e382f4ff-2fd6-4e40-8372-3d4871c075ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.229011 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9jkp\" (UniqueName: 
\"kubernetes.io/projected/f52b74ec-ff44-4ece-9ce7-9d71c781ede6-kube-api-access-p9jkp\") pod \"ingress-operator-5b745b69d9-cqwvl\" (UID: \"f52b74ec-ff44-4ece-9ce7-9d71c781ede6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.236619 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" Jan 20 09:21:05 crc kubenswrapper[4859]: W0120 09:21:05.267877 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4161b7ef_0bb8_47a9_a8f1_61804d03b08d.slice/crio-414c2a6e8130a2cd5ad3dab6c9a0439d6666cdf0ed84b87c894e0ffdf664245c WatchSource:0}: Error finding container 414c2a6e8130a2cd5ad3dab6c9a0439d6666cdf0ed84b87c894e0ffdf664245c: Status 404 returned error can't find the container with id 414c2a6e8130a2cd5ad3dab6c9a0439d6666cdf0ed84b87c894e0ffdf664245c Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.282745 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.303241 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.304035 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.803960435 +0000 UTC m=+140.559976621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.333962 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.335119 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" event={"ID":"99087dbb-8011-483e-87b6-fe5cb4bc203b","Type":"ContainerStarted","Data":"adb9b50dd5eeffd0e1a17004847a548f698a8061bf47dc1b1359c4783ac6fc04"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.336301 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" event={"ID":"6b827892-45de-41de-ae6a-fc9437a86871","Type":"ContainerStarted","Data":"39ddc681a629134991c1cd48f6b7464cf5c7db196c3f9b829d6d9607c8beb836"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.338004 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" event={"ID":"18ae7dff-85b9-4b11-be8a-b7afd856ebca","Type":"ContainerStarted","Data":"9d16f2c9fda3b139f47b8f6baa3f91e53a954a0543101d1fd254fccdb7728849"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.352531 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" 
event={"ID":"55e6a858-5ae7-4d3d-a454-227bf8b52195","Type":"ContainerStarted","Data":"c5f886738be49fea269a28f2542e0365c6d4c25277da647302572fb5cc82a1f0"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.353749 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" event={"ID":"7fbd020a-2b39-464a-a4af-965d3d5a4de1","Type":"ContainerStarted","Data":"034ed4ed9af913cb11487051a08b2fa7b15b3393eeab2afee9d48b22b88302e5"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.354998 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" event={"ID":"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d","Type":"ContainerStarted","Data":"e44ef768be11891d749fc19507b40fc3a8864d3d0a463b72afd0b3aa8bebaeb2"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.358106 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" event={"ID":"dd9dab0e-3012-4373-b89c-83d39534771f","Type":"ContainerStarted","Data":"b0d00c08cee0d93384bc72daf0cebf4c56696e95024b334865b7305ec0e84d25"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.358976 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" event={"ID":"4161b7ef-0bb8-47a9-a8f1-61804d03b08d","Type":"ContainerStarted","Data":"414c2a6e8130a2cd5ad3dab6c9a0439d6666cdf0ed84b87c894e0ffdf664245c"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.362217 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" event={"ID":"feaf2105-331f-4c98-8f11-9680aa0f9330","Type":"ContainerStarted","Data":"ed57ec6c2886e2e766b09a876225e232d09d454efdae96729ea58d40df5b863b"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.367448 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" event={"ID":"1e3a4bb8-24b4-4e23-93f4-90685b01134b","Type":"ContainerStarted","Data":"bdbe6faf29f9038b9ee08ea3fa069c60856c16a9c63480132c1a924d00351761"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.368865 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" event={"ID":"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9","Type":"ContainerStarted","Data":"be229b3de973a247ee70bb3a7094fdb3f05b6b59a96c4f0f4f94b9feb0c1489f"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.370504 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" event={"ID":"8bc0a7bf-e710-45d8-90df-576a0cbcf06d","Type":"ContainerStarted","Data":"904772a36db62508b522715aa52220f430759c567e5e77cd472d4976a5571f29"} Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.410963 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.411192 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.9111661 +0000 UTC m=+140.667182276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.411524 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.412050 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:05.912035353 +0000 UTC m=+140.668051549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.500735 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"] Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.512008 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.512195 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.012164722 +0000 UTC m=+140.768180898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.512251 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.512596 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.012584434 +0000 UTC m=+140.768600610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.613446 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.613749 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.11373479 +0000 UTC m=+140.869750966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.693140 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zbj6d"] Jan 20 09:21:05 crc kubenswrapper[4859]: W0120 09:21:05.705816 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05988ff3_118b_422f_aa51_ec26acb44fd5.slice/crio-ad7d4d9ff5f263c3533e5c8fb041e25a2d757f97702d5260f7e7f363954e9df8 WatchSource:0}: Error finding container ad7d4d9ff5f263c3533e5c8fb041e25a2d757f97702d5260f7e7f363954e9df8: Status 404 returned error can't find the container with id ad7d4d9ff5f263c3533e5c8fb041e25a2d757f97702d5260f7e7f363954e9df8 Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.714841 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.715140 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.215128474 +0000 UTC m=+140.971144650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: W0120 09:21:05.716926 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34034d1a_c537_454b_a196_592ec6f2e43f.slice/crio-74ebf3b9fbe63112a72d448e24ee0e531b10330267166f40b7e8cf42f7b65bc0 WatchSource:0}: Error finding container 74ebf3b9fbe63112a72d448e24ee0e531b10330267166f40b7e8cf42f7b65bc0: Status 404 returned error can't find the container with id 74ebf3b9fbe63112a72d448e24ee0e531b10330267166f40b7e8cf42f7b65bc0 Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.717547 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5nzb7"] Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.815709 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.816178 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.316160668 +0000 UTC m=+141.072176844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.832230 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft"] Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.904688 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tljms"] Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.920421 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:05 crc kubenswrapper[4859]: E0120 09:21:05.920772 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.42075671 +0000 UTC m=+141.176772886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:05 crc kubenswrapper[4859]: I0120 09:21:05.936281 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq"] Jan 20 09:21:05 crc kubenswrapper[4859]: W0120 09:21:05.943002 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9af6cf92_9706_4582_ba03_32fb453da4bf.slice/crio-93076aedfded7b0e8165dfcf827e3e198e3fd33b0d713e2f37a8cd483c75e48b WatchSource:0}: Error finding container 93076aedfded7b0e8165dfcf827e3e198e3fd33b0d713e2f37a8cd483c75e48b: Status 404 returned error can't find the container with id 93076aedfded7b0e8165dfcf827e3e198e3fd33b0d713e2f37a8cd483c75e48b Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.021061 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.021368 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.521354331 +0000 UTC m=+141.277370507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.122122 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.122466 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.622450427 +0000 UTC m=+141.378466603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.180135 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8"] Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.222970 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.223218 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.723192663 +0000 UTC m=+141.479208839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.223285 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.223710 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.723701016 +0000 UTC m=+141.479717192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.233557 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2pj89"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.245092 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c4jrg"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.247317 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.251481 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rdgmg"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.324826 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.325313 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.825296535 +0000 UTC m=+141.581312721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.389760 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-998nl"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.395478 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.417437 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.422896 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"]
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.426515 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.426961 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:06.926943356 +0000 UTC m=+141.682959552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.529197 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.529722 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.029683886 +0000 UTC m=+141.785700102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.631251 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.631634 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.131618815 +0000 UTC m=+141.887635001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.732342 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.732777 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.232759281 +0000 UTC m=+141.988775467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.834129 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.834533 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.334520555 +0000 UTC m=+142.090536741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.934754 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.934984 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.434945973 +0000 UTC m=+142.190962199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:06 crc kubenswrapper[4859]: I0120 09:21:06.935086 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:06 crc kubenswrapper[4859]: E0120 09:21:06.935630 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.43560789 +0000 UTC m=+142.191624096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.036067 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.036506 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.53648843 +0000 UTC m=+142.292504606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.137823 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.138183 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.638171002 +0000 UTC m=+142.394187178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.239218 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.239392 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.73936917 +0000 UTC m=+142.495385356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.239464 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.239760 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.739750291 +0000 UTC m=+142.495766467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.340604 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.341118 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.841033131 +0000 UTC m=+142.597049317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.341169 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.341554 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.841543845 +0000 UTC m=+142.597560031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.442528 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.442614 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.942595139 +0000 UTC m=+142.698611315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.443022 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.443318 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:07.943307989 +0000 UTC m=+142.699324165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.543873 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.544036 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.044007724 +0000 UTC m=+142.800023900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.544106 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.544392 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.044384054 +0000 UTC m=+142.800400230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.645279 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.645934 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.145899201 +0000 UTC m=+142.901915477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.747461 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.748044 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.248021705 +0000 UTC m=+143.004037921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.849994 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.850330 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.350287402 +0000 UTC m=+143.106303618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.850418 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.851151 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.351120935 +0000 UTC m=+143.107137141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.953451 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:07 crc kubenswrapper[4859]: E0120 09:21:07.953820 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.453801645 +0000 UTC m=+143.209817811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.954903 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh"]
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955207 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz"]
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955225 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" event={"ID":"05988ff3-118b-422f-aa51-ec26acb44fd5","Type":"ContainerStarted","Data":"ad7d4d9ff5f263c3533e5c8fb041e25a2d757f97702d5260f7e7f363954e9df8"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955249 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" event={"ID":"89c4974e-874d-414c-9a9b-987b6e9c9a5c","Type":"ContainerStarted","Data":"322873e8a0e0ab994804427b41aa6f4c853e9c5dc05e8315b06dbd87e231b7c2"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955263 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wkjvd" event={"ID":"9af6cf92-9706-4582-ba03-32fb453da4bf","Type":"ContainerStarted","Data":"93076aedfded7b0e8165dfcf827e3e198e3fd33b0d713e2f37a8cd483c75e48b"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955301 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" event={"ID":"d80ee0cc-e67b-4fef-8977-cac3732b5cbf","Type":"ContainerStarted","Data":"a1edcc5ff646b4ec2f93d5cf4fc2ccc247ba0877f21c12ae47e27265feab6f36"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955316 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wffjq" event={"ID":"34034d1a-c537-454b-a196-592ec6f2e43f","Type":"ContainerStarted","Data":"74ebf3b9fbe63112a72d448e24ee0e531b10330267166f40b7e8cf42f7b65bc0"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955329 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" event={"ID":"417685e2-532e-4391-828b-1696f0be8f9d","Type":"ContainerStarted","Data":"4674c3c7c46520ad1ec9d9632969e112c408028d41ea7281604f30aae83b3163"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.955341 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" event={"ID":"18ae7dff-85b9-4b11-be8a-b7afd856ebca","Type":"ContainerStarted","Data":"3ffbf1cd47f4998b0995245b08209e7a441387db531088562ba3bbe9c62ec79d"}
Jan 20 09:21:07 crc kubenswrapper[4859]: I0120 09:21:07.966344 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zj874" podStartSLOduration=123.96632678 podStartE2EDuration="2m3.96632678s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:07.965770504 +0000 UTC m=+142.721786670" watchObservedRunningTime="2026-01-20 09:21:07.96632678 +0000 UTC m=+142.722342956"
Jan 20 09:21:08 crc kubenswrapper[4859]: W0120 09:21:08.035776 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e7bf26_160c_4f98_b533_a9433061df3e.slice/crio-5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45 WatchSource:0}: Error finding container 5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45: Status 404 returned error can't find the container with id 5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45
Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.054315 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.055476 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.555461235 +0000 UTC m=+143.311477411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:08 crc kubenswrapper[4859]: W0120 09:21:08.062434 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39408983_b8b3_4dc3_ab3c_f57031ce7a5d.slice/crio-53462d8f1092d9686aed765c47b763169468a288ed4470146e087b1d28030d26 WatchSource:0}: Error finding container 53462d8f1092d9686aed765c47b763169468a288ed4470146e087b1d28030d26: Status 404 returned error can't find the container with id 53462d8f1092d9686aed765c47b763169468a288ed4470146e087b1d28030d26
Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.155692 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.156074 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.656057417 +0000 UTC m=+143.412073593 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.156254 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.156614 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.656600331 +0000 UTC m=+143.412616507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.256912 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.258877 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.758861039 +0000 UTC m=+143.514877215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.294625 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nnrsm"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.359229 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.359600 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.859580214 +0000 UTC m=+143.615596390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.463337 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.463830 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:08.963811036 +0000 UTC m=+143.719827212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.517622 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.579625 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.580740 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.080724777 +0000 UTC m=+143.836740953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.682820 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.684271 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.184253329 +0000 UTC m=+143.940269505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: W0120 09:21:08.697641 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e8337b_a5c8_468f_a859_9fa0ba3eb981.slice/crio-24a807be7719827e1b4b070cb05eb0b7ca685c8f1035e15f86a5ec26bc3356d8 WatchSource:0}: Error finding container 24a807be7719827e1b4b070cb05eb0b7ca685c8f1035e15f86a5ec26bc3356d8: Status 404 returned error can't find the container with id 24a807be7719827e1b4b070cb05eb0b7ca685c8f1035e15f86a5ec26bc3356d8 Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.787433 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx6p7"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.788507 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.788885 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.288868672 +0000 UTC m=+144.044884848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.798767 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.800664 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7jqnq"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.829809 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.859744 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.897870 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:08 crc kubenswrapper[4859]: E0120 09:21:08.898221 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.398207084 +0000 UTC m=+144.154223260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.899884 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.912416 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hqm8v"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.940921 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl"] Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.976453 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7jqnq" event={"ID":"bd9a36ff-f9d5-4694-bf93-8762ec135ca8","Type":"ContainerStarted","Data":"041a218218f2129081cd0dcb8b8444f47ede640b12a9f371067a157cd8cfd33c"} Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.983097 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" event={"ID":"d80ee0cc-e67b-4fef-8977-cac3732b5cbf","Type":"ContainerStarted","Data":"e95b622691292d1ced2cd9eb8e18f616851d25aee96d722677c28ed4d6f6c993"} Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.984566 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.988606 4859 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress/router-default-5444994796-wffjq" event={"ID":"34034d1a-c537-454b-a196-592ec6f2e43f","Type":"ContainerStarted","Data":"dc6b4cb95147931d6d8871eb518009290f2170d0a0fe5086924c3b40908dba32"} Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.995491 4859 patch_prober.go:28] interesting pod/console-operator-58897d9998-5nzb7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 20 09:21:08 crc kubenswrapper[4859]: I0120 09:21:08.995549 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" podUID="d80ee0cc-e67b-4fef-8977-cac3732b5cbf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:08.999156 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:08.999452 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.499438563 +0000 UTC m=+144.255454739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.010059 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" event={"ID":"4d189960-186e-471b-b3fa-456854ca6763","Type":"ContainerStarted","Data":"17aeb1e4b1c03c4f1d5cf010cf4fef70a3a76bde73f291526a50eebc6e5206b9"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.010105 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" event={"ID":"4d189960-186e-471b-b3fa-456854ca6763","Type":"ContainerStarted","Data":"7c3ca029dc75e3368a63ee69c8a23b0a5e2dd83243a4aabe49704a975646543c"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.038406 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" event={"ID":"22b0a1dc-3c28-4107-8c06-8b2518a35af5","Type":"ContainerStarted","Data":"d3ac78a4a4d0c77618db0a82edc3ec01d83e4c7bb49aac495619e4ce01a2ae55"} Jan 20 09:21:09 crc kubenswrapper[4859]: W0120 09:21:09.049693 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3dc8b34_a9f3_4bd9_9ece_39f09bf8ec27.slice/crio-4484645d98eaf60674ae0171c5cf9f2519d365d1c2704a7d3452e6b94215f977 WatchSource:0}: Error finding container 4484645d98eaf60674ae0171c5cf9f2519d365d1c2704a7d3452e6b94215f977: Status 404 returned error can't find the container with id 
4484645d98eaf60674ae0171c5cf9f2519d365d1c2704a7d3452e6b94215f977 Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.050881 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" event={"ID":"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed","Type":"ContainerStarted","Data":"fd174d37724b2444b767a273f827f5c7a50f95b3a7c32a10945d78024d5ded89"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.070251 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nnrsm" event={"ID":"a80a5e3b-eb7a-49f6-a9c5-1860decdfc75","Type":"ContainerStarted","Data":"e5ee2a3a2718eec4c0ce0268f410dd19fa14bb0c8a9553865f90e437c380d7c0"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.071859 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" event={"ID":"55e6a858-5ae7-4d3d-a454-227bf8b52195","Type":"ContainerStarted","Data":"af2815e3cf16c8960867edb7c077287d6a437027955777a07e3b299d5a0d45c1"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.072988 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" event={"ID":"1e3a4bb8-24b4-4e23-93f4-90685b01134b","Type":"ContainerStarted","Data":"13e8b261ccba2874a4b496bcc562af08f022d2732e10ea2bef6b86642aaed164"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.084127 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" podStartSLOduration=125.084110286 podStartE2EDuration="2m5.084110286s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.083485199 +0000 UTC m=+143.839501375" watchObservedRunningTime="2026-01-20 09:21:09.084110286 +0000 UTC 
m=+143.840126462" Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.101631 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.101821 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.601778353 +0000 UTC m=+144.357794529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.101868 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.102084 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" 
event={"ID":"8bc0a7bf-e710-45d8-90df-576a0cbcf06d","Type":"ContainerStarted","Data":"ceeda78246b1ee976e23477387496689692c2250818d94ef67da19dbe427b959"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.102757 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.103348 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.603332556 +0000 UTC m=+144.359348732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.114894 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" event={"ID":"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb","Type":"ContainerStarted","Data":"856b324ab2c8b8d8217e2bee7c4232f8283a303903ea4c253b9bf09846a834e2"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.129856 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" event={"ID":"4161b7ef-0bb8-47a9-a8f1-61804d03b08d","Type":"ContainerStarted","Data":"2e2e7ea05313836a6180ef2b3bfe9b9cf00bfefe4240623fadd5f8b820bfd08b"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.131753 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" event={"ID":"ed32b086-da01-429f-a440-1823d9d18e9a","Type":"ContainerStarted","Data":"772bc0c147e3fe015cda9fee76ddd8437b4addd634427974b50e3bc5724cc4d1"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.131800 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" event={"ID":"ed32b086-da01-429f-a440-1823d9d18e9a","Type":"ContainerStarted","Data":"3f6b04103e2eb86f2c3a9d24f1550f7ed66cdb7f2b5cf9e1f2ff273c0af6b62d"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.132505 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.136095 4859 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fvtsl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.136125 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" podUID="ed32b086-da01-429f-a440-1823d9d18e9a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.138267 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" event={"ID":"82733214-d1d7-49bd-ae3e-49ffd0c18c6e","Type":"ContainerStarted","Data":"5fed86d6246d26ef3e22caa054bdd9c0653cb4f82d3e0df12908cb8f473956ea"} Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.187339 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.193323 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" event={"ID":"e5c5042b-9158-4a46-b771-19f91eab097f","Type":"ContainerStarted","Data":"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.193549 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" event={"ID":"e5c5042b-9158-4a46-b771-19f91eab097f","Type":"ContainerStarted","Data":"e5b1a6f13f35925a3498b2fd759c60d3ba02d19ee9a7c7b8a53dc48b77b64ae4"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.195400 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.197877 4859 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rpkn9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.198069 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.202879 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.203857 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.703842855 +0000 UTC m=+144.459859031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.210821 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" event={"ID":"99087dbb-8011-483e-87b6-fe5cb4bc203b","Type":"ContainerStarted","Data":"bb1e8657fe88d6f9ddaa22907b4aa3954d140f1f10e356187f4330d1643b9da2"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.211394 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.234878 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" event={"ID":"a8e8337b-a5c8-468f-a859-9fa0ba3eb981","Type":"ContainerStarted","Data":"24a807be7719827e1b4b070cb05eb0b7ca685c8f1035e15f86a5ec26bc3356d8"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.235638 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.272311 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lhhg9" podStartSLOduration=125.272292831 podStartE2EDuration="2m5.272292831s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.222803858 +0000 UTC m=+143.978820034" watchObservedRunningTime="2026-01-20 09:21:09.272292831 +0000 UTC m=+144.028309007"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.274707 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wffjq" podStartSLOduration=125.274699938 podStartE2EDuration="2m5.274699938s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.272202368 +0000 UTC m=+144.028218564" watchObservedRunningTime="2026-01-20 09:21:09.274699938 +0000 UTC m=+144.030716114"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.277160 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" event={"ID":"f6e7bf26-160c-4f98-b533-a9433061df3e","Type":"ContainerStarted","Data":"5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.279062 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" event={"ID":"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7","Type":"ContainerStarted","Data":"b56def0f7c302dc0da98174cb0aabafaa8f8da6c6e374ab1582536223dd0f80b"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.282172 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c4jrg" event={"ID":"39408983-b8b3-4dc3-ab3c-f57031ce7a5d","Type":"ContainerStarted","Data":"53462d8f1092d9686aed765c47b763169468a288ed4470146e087b1d28030d26"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.285584 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" event={"ID":"26f9bf13-ca6f-4939-aba2-10cf819a8f1d","Type":"ContainerStarted","Data":"069a25d8e03081dc2f0c2fb4af1648f37bfb0d1fe443c12940a4e98dad52e502"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.285743 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" event={"ID":"26f9bf13-ca6f-4939-aba2-10cf819a8f1d","Type":"ContainerStarted","Data":"8c6bcff23e32599eabbd49c552eee7dd817e57c84e6e9ab7c74e15dca2305908"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.289624 4859 generic.go:334] "Generic (PLEG): container finished" podID="7fbd020a-2b39-464a-a4af-965d3d5a4de1" containerID="e6708ba7f569c21d64166bd42acc6269fbe45b416a69e93709df6c02d78c4713" exitCode=0
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.289799 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" event={"ID":"7fbd020a-2b39-464a-a4af-965d3d5a4de1","Type":"ContainerDied","Data":"e6708ba7f569c21d64166bd42acc6269fbe45b416a69e93709df6c02d78c4713"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.300246 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" event={"ID":"ea7aa939-02d2-43ce-b78a-5fb6e19d0f7d","Type":"ContainerStarted","Data":"2b9261f9ed3b2dd8ad40044fc839fe7debca79a1993d14e424a98a871b90894a"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.304268 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.309080 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.809062375 +0000 UTC m=+144.565078551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.326134 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" podStartSLOduration=125.326118315 podStartE2EDuration="2m5.326118315s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.295601833 +0000 UTC m=+144.051618009" watchObservedRunningTime="2026-01-20 09:21:09.326118315 +0000 UTC m=+144.082134491"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.326870 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" podStartSLOduration=124.326865645 podStartE2EDuration="2m4.326865645s" podCreationTimestamp="2026-01-20 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.324937452 +0000 UTC m=+144.080953628" watchObservedRunningTime="2026-01-20 09:21:09.326865645 +0000 UTC m=+144.082881811"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.328495 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" event={"ID":"05988ff3-118b-422f-aa51-ec26acb44fd5","Type":"ContainerStarted","Data":"001e3fe8e05764a7e8d59620ad37caee65daf16f7cda14a27226a70ac7dadac7"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.328719 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.357093 4859 generic.go:334] "Generic (PLEG): container finished" podID="feaf2105-331f-4c98-8f11-9680aa0f9330" containerID="806944a692b78a7f63f40a87c4e9e829332581da95a0f1eb1471263c0caf182b" exitCode=0
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.357166 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" event={"ID":"feaf2105-331f-4c98-8f11-9680aa0f9330","Type":"ContainerDied","Data":"806944a692b78a7f63f40a87c4e9e829332581da95a0f1eb1471263c0caf182b"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.358303 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" podStartSLOduration=125.358174278 podStartE2EDuration="2m5.358174278s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.35792162 +0000 UTC m=+144.113937796" watchObservedRunningTime="2026-01-20 09:21:09.358174278 +0000 UTC m=+144.114190454"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.381237 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tljms" event={"ID":"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50","Type":"ContainerStarted","Data":"961e2c31dac61c1e4c3e1aa2b0124f8b57eb0949bb6cc7d062d4c610f33a7a97"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.381278 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tljms" event={"ID":"c3eb4647-fc2e-430e-81cd-3f8c1c8bee50","Type":"ContainerStarted","Data":"fef2300c85795751f71c236c9764042d80fd4549145de3daf9568e0290721dd9"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.386089 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" podStartSLOduration=125.386070836 podStartE2EDuration="2m5.386070836s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.381028997 +0000 UTC m=+144.137045173" watchObservedRunningTime="2026-01-20 09:21:09.386070836 +0000 UTC m=+144.142087012"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.388587 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" event={"ID":"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9","Type":"ContainerStarted","Data":"8be30240239378fdfc6a36502fc944467ba470513ff1a13c6a0d3656cbcfec91"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.389455 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.395229 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" event={"ID":"dd9dab0e-3012-4373-b89c-83d39534771f","Type":"ContainerStarted","Data":"3ba10583e48cb6f329f3b62b216a3f660cc3ea0872d1c6290a4d5cf8ff79a5cd"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.400478 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" event={"ID":"6b827892-45de-41de-ae6a-fc9437a86871","Type":"ContainerStarted","Data":"15f4f8b569774a59e4726a1b3b51c376f5ce9ac92f21592f3ccc8e065619c5e5"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.405488 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.405563 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.905546592 +0000 UTC m=+144.661562768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.410623 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.416846 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:09.916829633 +0000 UTC m=+144.672845819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.418848 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" event={"ID":"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0","Type":"ContainerStarted","Data":"a88e341e6e709d442fd0d7e8869c7dd80f29cbaa3b91985693219859d21dcf72"}
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.420823 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2mnrr" podStartSLOduration=124.420806833 podStartE2EDuration="2m4.420806833s" podCreationTimestamp="2026-01-20 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.419299331 +0000 UTC m=+144.175315537" watchObservedRunningTime="2026-01-20 09:21:09.420806833 +0000 UTC m=+144.176823009"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.421406 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" podStartSLOduration=125.42139908 podStartE2EDuration="2m5.42139908s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.401045628 +0000 UTC m=+144.157061814" watchObservedRunningTime="2026-01-20 09:21:09.42139908 +0000 UTC m=+144.177415256"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.447529 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" podStartSLOduration=125.447512648 podStartE2EDuration="2m5.447512648s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.444615669 +0000 UTC m=+144.200631845" watchObservedRunningTime="2026-01-20 09:21:09.447512648 +0000 UTC m=+144.203528824"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.462104 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tljms" podStartSLOduration=8.462087071 podStartE2EDuration="8.462087071s" podCreationTimestamp="2026-01-20 09:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.461547145 +0000 UTC m=+144.217563341" watchObservedRunningTime="2026-01-20 09:21:09.462087071 +0000 UTC m=+144.218103247"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.481567 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tlm96" podStartSLOduration=125.481540997 podStartE2EDuration="2m5.481540997s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.479191722 +0000 UTC m=+144.235207898" watchObservedRunningTime="2026-01-20 09:21:09.481540997 +0000 UTC m=+144.237557173"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.512942 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.514297 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.014261168 +0000 UTC m=+144.770277354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.518552 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wfx2f" podStartSLOduration=125.518539116 podStartE2EDuration="2m5.518539116s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.518219587 +0000 UTC m=+144.274235763" watchObservedRunningTime="2026-01-20 09:21:09.518539116 +0000 UTC m=+144.274555292"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.537417 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" podStartSLOduration=125.537400406 podStartE2EDuration="2m5.537400406s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.535053171 +0000 UTC m=+144.291069347" watchObservedRunningTime="2026-01-20 09:21:09.537400406 +0000 UTC m=+144.293416582"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.550635 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" podStartSLOduration=125.55062077 podStartE2EDuration="2m5.55062077s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.549391636 +0000 UTC m=+144.305407822" watchObservedRunningTime="2026-01-20 09:21:09.55062077 +0000 UTC m=+144.306636946"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.593323 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" podStartSLOduration=125.593309686 podStartE2EDuration="2m5.593309686s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.564875522 +0000 UTC m=+144.320891808" watchObservedRunningTime="2026-01-20 09:21:09.593309686 +0000 UTC m=+144.349325862"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.602252 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" podStartSLOduration=125.602230332 podStartE2EDuration="2m5.602230332s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.60215235 +0000 UTC m=+144.358168516" watchObservedRunningTime="2026-01-20 09:21:09.602230332 +0000 UTC m=+144.358246508"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.614131 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.614598 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.114576982 +0000 UTC m=+144.870593158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.619248 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc" podStartSLOduration=125.61923623 podStartE2EDuration="2m5.61923623s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:09.618581493 +0000 UTC m=+144.374597669" watchObservedRunningTime="2026-01-20 09:21:09.61923623 +0000 UTC m=+144.375252406"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.698694 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.700314 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.700357 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.714967 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.715109 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.215090561 +0000 UTC m=+144.971106737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.715232 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.715592 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.215578415 +0000 UTC m=+144.971594591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.815659 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.815899 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.315869777 +0000 UTC m=+145.071885963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.816172 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.816483 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.316473214 +0000 UTC m=+145.072489390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.916704 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.918596 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.418560687 +0000 UTC m=+145.174576863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:09 crc kubenswrapper[4859]: I0120 09:21:09.918737 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:09 crc kubenswrapper[4859]: E0120 09:21:09.919212 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.419201794 +0000 UTC m=+145.175217970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.019394 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.019717 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.519676133 +0000 UTC m=+145.275692339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.049049 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.049134 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.112416 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvvlc"
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.120368 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.120760 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.620747628 +0000 UTC m=+145.376763804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.221275 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.221443 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.721415781 +0000 UTC m=+145.477431957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.221837 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.222095 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.7220841 +0000 UTC m=+145.478100276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.323049 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.323356 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.823331879 +0000 UTC m=+145.579348055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.390015 4859 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7fk4r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.390265 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.420977 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wkjvd" event={"ID":"9af6cf92-9706-4582-ba03-32fb453da4bf","Type":"ContainerStarted","Data":"047fa6f338d66457e20949d3a2643d2e4d4ae438f20165bdb435668ef91d3b80"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.422282 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" event={"ID":"5e06e843-b10c-4d2e-beb8-45db4f8b0b22","Type":"ContainerStarted","Data":"da8eea599a71f65d9e13360b83f598bebd8c4a99d7b002bd1f7998316b4ee05f"} Jan 20 
09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.422945 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" event={"ID":"b6433dee-b2ac-4bde-aea5-66d641ecdfa2","Type":"ContainerStarted","Data":"ef020600e7568258ddd15930cfcb316dc1f973c0559a3da61b5ca3c3a3361468"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.423701 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.424073 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:10.924055695 +0000 UTC m=+145.680071871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.424321 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" event={"ID":"e382f4ff-2fd6-4e40-8372-3d4871c075ec","Type":"ContainerStarted","Data":"37895a7445672f9bce07bab8c03864bc2e34cb2d81a778cbcd80a274f128b2b7"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.425306 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" event={"ID":"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb","Type":"ContainerStarted","Data":"5f03c5a1cae5b6fbc13225784d3da08189fd083e7aee7a36af59e846d1be24f1"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.426500 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" event={"ID":"f6e7bf26-160c-4f98-b533-a9433061df3e","Type":"ContainerStarted","Data":"dda355d60afe64996e5a6c5357d0f984565062f6c111c9e14bb359c77ccc36bb"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.427408 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" event={"ID":"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2","Type":"ContainerStarted","Data":"d37b2061a0d09647e7cfdcce841cc29d6dbf9175bad95c6d8907a8e8077178e5"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.428882 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5sjq" event={"ID":"dd9dab0e-3012-4373-b89c-83d39534771f","Type":"ContainerStarted","Data":"96549cd0a69080e6776ca3e8b00639335c7e738a954315ea4b635ea14ad1b2a3"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.429993 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhngt" event={"ID":"71fa7ab3-5b15-4256-b6d0-87bf9a6d8cf0","Type":"ContainerStarted","Data":"105a1378e5fbc154617c92a070ba3953f99b28e2076469cf94563bd23870f72f"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.431188 4859 generic.go:334] "Generic (PLEG): container finished" podID="4d189960-186e-471b-b3fa-456854ca6763" containerID="17aeb1e4b1c03c4f1d5cf010cf4fef70a3a76bde73f291526a50eebc6e5206b9" exitCode=0 Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.431212 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" event={"ID":"4d189960-186e-471b-b3fa-456854ca6763","Type":"ContainerDied","Data":"17aeb1e4b1c03c4f1d5cf010cf4fef70a3a76bde73f291526a50eebc6e5206b9"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.432035 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" event={"ID":"f52b74ec-ff44-4ece-9ce7-9d71c781ede6","Type":"ContainerStarted","Data":"0bc1853d2780964e45371fa1e62370af79a4ecf3289e4757b867d488a752a4d4"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.433358 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" event={"ID":"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27","Type":"ContainerStarted","Data":"4484645d98eaf60674ae0171c5cf9f2519d365d1c2704a7d3452e6b94215f977"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.434353 4859 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6h4hh" event={"ID":"db9a62f9-0fb2-471d-8abc-4ebf9ab08ca7","Type":"ContainerStarted","Data":"22b7779761c36a13f5709427780c9da9b0671708db9552ef209f8a0d3313ca03"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.435673 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" event={"ID":"82733214-d1d7-49bd-ae3e-49ffd0c18c6e","Type":"ContainerStarted","Data":"90f2eebd385888b0aa5b0c0b90fa8b880dc1679a8be7b018ec5d6e7e08814c3f"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.436378 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" event={"ID":"8abb1de6-2a1d-4144-9836-f0ac771b67ce","Type":"ContainerStarted","Data":"09277fcc22261024ba86995474b9322a27e7d04f1902ed98a9b168fe7999355a"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.437578 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v9zft" event={"ID":"417685e2-532e-4391-828b-1696f0be8f9d","Type":"ContainerStarted","Data":"f552fcb8bc5234e2ae9ed01250c777d91072e0b252a214991b250ea228a81640"} Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.438106 4859 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rpkn9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.438142 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 
10.217.0.34:8080: connect: connection refused" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.438143 4859 patch_prober.go:28] interesting pod/console-operator-58897d9998-5nzb7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.438203 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" podUID="d80ee0cc-e67b-4fef-8977-cac3732b5cbf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.439502 4859 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fvtsl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.439903 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" podUID="ed32b086-da01-429f-a440-1823d9d18e9a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.443344 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wkjvd" podStartSLOduration=9.443328306 podStartE2EDuration="9.443328306s" podCreationTimestamp="2026-01-20 09:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 09:21:10.439287644 +0000 UTC m=+145.195303820" watchObservedRunningTime="2026-01-20 09:21:10.443328306 +0000 UTC m=+145.199344482" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.525183 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.525917 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.025900241 +0000 UTC m=+145.781916417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.592714 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.593568 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.598283 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.605930 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.629079 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.631648 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.131633004 +0000 UTC m=+145.887649180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.699342 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.699438 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.729941 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.730127 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.230090546 +0000 UTC m=+145.986106752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.730209 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzs94\" (UniqueName: \"kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.730529 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.730607 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.730707 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.730900 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.230888248 +0000 UTC m=+145.986904424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.790586 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.791633 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.801385 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.814913 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.831392 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.831583 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.331559502 +0000 UTC m=+146.087575678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.831836 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.831935 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzs94\" (UniqueName: \"kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.832085 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.832174 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: 
\"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.832488 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.332476748 +0000 UTC m=+146.088492924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.832528 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.832755 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content\") pod \"certified-operators-xnjx9\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.859772 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzs94\" (UniqueName: \"kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94\") pod \"certified-operators-xnjx9\" (UID: 
\"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.909853 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.933683 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.933891 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.433856341 +0000 UTC m=+146.189872557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.934005 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.934105 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.934292 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggch5\" (UniqueName: \"kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.934401 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:10 crc kubenswrapper[4859]: E0120 09:21:10.934596 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.43457745 +0000 UTC m=+146.190593656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.989030 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:21:10 crc kubenswrapper[4859]: I0120 09:21:10.989923 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.006296 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.035442 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.035709 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.535671816 +0000 UTC m=+146.291688032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.035918 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggch5\" (UniqueName: \"kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.036004 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.036052 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.036124 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: 
\"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.036588 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.536535519 +0000 UTC m=+146.292551735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.038400 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.038677 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content\") pod \"community-operators-2cm2d\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.055395 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggch5\" (UniqueName: \"kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5\") pod \"community-operators-2cm2d\" (UID: 
\"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.112323 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.136804 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.136977 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.137089 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.637049689 +0000 UTC m=+146.393065905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.137159 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjw2d\" (UniqueName: \"kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.137354 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.137563 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.138110 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:21:11.638084448 +0000 UTC m=+146.394100664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.204919 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.207449 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242399 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242508 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242539 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242560 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242592 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjw2d\" (UniqueName: \"kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242616 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89vt\" (UniqueName: \"kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.242648 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.243077 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.243153 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.743140142 +0000 UTC m=+146.499156318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.243402 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.252029 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.282959 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjw2d\" (UniqueName: \"kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d\") pod \"certified-operators-9sm4f\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " 
pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.325297 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.345633 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.345700 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.345741 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.345829 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89vt\" (UniqueName: \"kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.346119 4859 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.846105778 +0000 UTC m=+146.602121954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.346935 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.346975 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.375961 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89vt\" (UniqueName: \"kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt\") pod \"community-operators-nklcx\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.438882 4859 patch_prober.go:28] interesting 
pod/oauth-openshift-558db77b4-7fk4r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.438961 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.447523 4859 csr.go:261] certificate signing request csr-dxrfq is approved, waiting to be issued Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.448124 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.448519 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:11.948505341 +0000 UTC m=+146.704521517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.465301 4859 csr.go:257] certificate signing request csr-dxrfq is issued Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.521739 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" event={"ID":"0e70b9af-669a-4ef2-b830-ed7b6d7e12fb","Type":"ContainerStarted","Data":"db1313b940b6087eee35f944de8f5387d195aed8fb8de906658a3b2da445bbf4"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.546245 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.547876 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dcnjz" podStartSLOduration=127.547864007 podStartE2EDuration="2m7.547864007s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.546766268 +0000 UTC m=+146.302782444" watchObservedRunningTime="2026-01-20 09:21:11.547864007 +0000 UTC m=+146.303880183" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.549986 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.550279 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.050263094 +0000 UTC m=+146.806279280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.607541 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" event={"ID":"b6433dee-b2ac-4bde-aea5-66d641ecdfa2","Type":"ContainerStarted","Data":"7e7bf6f86bc9505dd0f242ae24753f5c264a9b5f678b6f2d22a952b8d739fa3a"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.609434 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c4jrg" event={"ID":"39408983-b8b3-4dc3-ab3c-f57031ce7a5d","Type":"ContainerStarted","Data":"319628f9c84f543d49bd5a5d9279b158b8e64e866805458062e63ec5c460c4fb"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.652226 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.652639 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.152623394 +0000 UTC m=+146.908639570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.660182 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" event={"ID":"22b0a1dc-3c28-4107-8c06-8b2518a35af5","Type":"ContainerStarted","Data":"d99311869e5081ef1c6fa5dd7413bfe79e37c2c6e36a2f4453e7e0e80704e0ef"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.711157 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" event={"ID":"26f9bf13-ca6f-4939-aba2-10cf819a8f1d","Type":"ContainerStarted","Data":"dfdf96192a30014b76dcd09cc579548d01170635bd67542ab81a41049cb139ca"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.717717 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" event={"ID":"1c7c602a-e7f3-42de-a0ab-38e317f8b4ed","Type":"ContainerStarted","Data":"6870e6765192783f65262cc5e15ea818103c4642a8f1de34da95fbfebb103cc5"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.721153 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:11 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:11 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:11 crc kubenswrapper[4859]: healthz check failed 
Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.721187 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.738127 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" event={"ID":"55e6a858-5ae7-4d3d-a454-227bf8b52195","Type":"ContainerStarted","Data":"40ebd189791788cbe93cdb0414bd2233607596c94d3ed64b55a2d1ccb9926537"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.755597 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.757343 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.257330469 +0000 UTC m=+147.013346645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.777575 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jx6p7" podStartSLOduration=126.777558417 podStartE2EDuration="2m6.777558417s" podCreationTimestamp="2026-01-20 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.710057306 +0000 UTC m=+146.466073472" watchObservedRunningTime="2026-01-20 09:21:11.777558417 +0000 UTC m=+146.533574593" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.777952 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rdgmg" podStartSLOduration=127.777948407 podStartE2EDuration="2m7.777948407s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.773356871 +0000 UTC m=+146.529373047" watchObservedRunningTime="2026-01-20 09:21:11.777948407 +0000 UTC m=+146.533964573" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.787475 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nnrsm" event={"ID":"a80a5e3b-eb7a-49f6-a9c5-1860decdfc75","Type":"ContainerStarted","Data":"768a3b1749f34ebdee2660650603bc8fd7ed1cc462f26c125232693e7d5b97ba"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 
09:21:11.789481 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.809168 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" event={"ID":"a8e8337b-a5c8-468f-a859-9fa0ba3eb981","Type":"ContainerStarted","Data":"dd5333e874ac5ad98f31870f510618c5073fa582e1c7fd321907a56ef869b01b"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.832631 4859 patch_prober.go:28] interesting pod/downloads-7954f5f757-nnrsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.832696 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nnrsm" podUID="a80a5e3b-eb7a-49f6-a9c5-1860decdfc75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.835825 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" event={"ID":"89c4974e-874d-414c-9a9b-987b6e9c9a5c","Type":"ContainerStarted","Data":"e888431d556550d43d7239a82d328ce9ad700fd991ce8f5798294ec5db4688e1"} Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.857071 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.859687 4859 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.359668428 +0000 UTC m=+147.115684604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.860274 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.863612 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.363595727 +0000 UTC m=+147.119611903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.864529 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.870957 4859 patch_prober.go:28] interesting pod/console-operator-58897d9998-5nzb7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.873923 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-45s4x" podStartSLOduration=127.873910541 podStartE2EDuration="2m7.873910541s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.831305697 +0000 UTC m=+146.587321873" watchObservedRunningTime="2026-01-20 09:21:11.873910541 +0000 UTC m=+146.629926717" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.903040 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-998nl" podStartSLOduration=127.903023963 podStartE2EDuration="2m7.903023963s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.872902924 +0000 UTC m=+146.628919100" watchObservedRunningTime="2026-01-20 09:21:11.903023963 +0000 UTC m=+146.659040139" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.904249 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5nzb7" podUID="d80ee0cc-e67b-4fef-8977-cac3732b5cbf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.927068 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn6p8" podStartSLOduration=127.927051665 podStartE2EDuration="2m7.927051665s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.925946735 +0000 UTC m=+146.681962911" watchObservedRunningTime="2026-01-20 09:21:11.927051665 +0000 UTC m=+146.683067841" Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.965015 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:11 crc kubenswrapper[4859]: E0120 09:21:11.965638 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 09:21:12.465623438 +0000 UTC m=+147.221639604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:11 crc kubenswrapper[4859]: I0120 09:21:11.991577 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nnrsm" podStartSLOduration=127.991556642 podStartE2EDuration="2m7.991556642s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:11.99072932 +0000 UTC m=+146.746745496" watchObservedRunningTime="2026-01-20 09:21:11.991556642 +0000 UTC m=+146.747572818" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.068811 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.070769 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.570755545 +0000 UTC m=+147.326771721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.171337 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.173116 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.673099064 +0000 UTC m=+147.429115240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.275277 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.275627 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.775614589 +0000 UTC m=+147.531630765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.387564 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.387776 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.887749888 +0000 UTC m=+147.643766074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.388163 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.388532 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.88852103 +0000 UTC m=+147.644537276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.466874 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 09:16:11 +0000 UTC, rotation deadline is 2026-10-04 02:59:33.400967571 +0000 UTC Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.467110 4859 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6161h38m20.933860176s for next certificate rotation Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.494476 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.494808 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:12.994791898 +0000 UTC m=+147.750808074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.596847 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.597159 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.097147008 +0000 UTC m=+147.853163174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.633553 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.682168 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fvtsl" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.698372 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.698736 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.198720526 +0000 UTC m=+147.954736702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.715574 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:12 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:12 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:12 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.715626 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.804470 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.804820 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:21:13.304808969 +0000 UTC m=+148.060825145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.830407 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.841055 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.841961 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.852765 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.876885 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.884315 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" event={"ID":"e382f4ff-2fd6-4e40-8372-3d4871c075ec","Type":"ContainerStarted","Data":"1f025e243f85652712ebfd749b47895513956cc022ae0250d98556fd8989cadf"} Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.884379 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.905951 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.906241 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.906307 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.906360 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87j4d\" (UniqueName: \"kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:12 crc kubenswrapper[4859]: E0120 09:21:12.906471 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.40645657 +0000 UTC m=+148.162472746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.930975 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" event={"ID":"82733214-d1d7-49bd-ae3e-49ffd0c18c6e","Type":"ContainerStarted","Data":"516d9118245aebfeeaa6df28e51281ff73f9b45a98a0eb2d334231cacc9bebab"} Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.942087 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.991843 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" event={"ID":"8abb1de6-2a1d-4144-9836-f0ac771b67ce","Type":"ContainerStarted","Data":"3da5626a99f7724532a66602ed2bad5be04448c431245e833f87d6d7bd80bcd1"} Jan 20 09:21:12 crc kubenswrapper[4859]: I0120 09:21:12.999879 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerStarted","Data":"05f6e9e0ad0fb4527966a68390312e9f1a6d43044d576cb72917afc4259beb7b"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.014084 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " 
pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.014130 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.014182 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.014248 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87j4d\" (UniqueName: \"kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.015726 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.015989 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:21:13.515977787 +0000 UTC m=+148.271993963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.016152 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.016650 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content\") pod \"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.041803 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cckbv" podStartSLOduration=129.041762917 podStartE2EDuration="2m9.041762917s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:12.999952366 +0000 UTC m=+147.755968542" watchObservedRunningTime="2026-01-20 09:21:13.041762917 +0000 UTC m=+147.797779093" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.064849 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" 
event={"ID":"5e06e843-b10c-4d2e-beb8-45db4f8b0b22","Type":"ContainerStarted","Data":"062714a4dd0b55c9512d90c51913e3411beeb2aaeed5785beebbfd0ef5bb4564"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.116612 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.117302 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.617282788 +0000 UTC m=+148.373298964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.145736 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" event={"ID":"feaf2105-331f-4c98-8f11-9680aa0f9330","Type":"ContainerStarted","Data":"424e466bef6827c91e2f996c872e5b6bfedfd99a32206718424fe6ae18d4d7c4"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.155810 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87j4d\" (UniqueName: \"kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d\") pod 
\"redhat-marketplace-qcvpq\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.206087 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" event={"ID":"4d189960-186e-471b-b3fa-456854ca6763","Type":"ContainerStarted","Data":"bfb0617ff2cdf3b446f70c328f2a5e17db0078d8921f14e1da6b967e2cdc48f0"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.206770 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.213216 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.218451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.220560 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.720547813 +0000 UTC m=+148.476563989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.227892 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"] Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.228967 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.229043 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.243415 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2pj89" podStartSLOduration=129.243396263 podStartE2EDuration="2m9.243396263s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.241810999 +0000 UTC m=+147.997827175" watchObservedRunningTime="2026-01-20 09:21:13.243396263 +0000 UTC m=+147.999412429" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.259533 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"] Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.317242 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" 
event={"ID":"f52b74ec-ff44-4ece-9ce7-9d71c781ede6","Type":"ContainerStarted","Data":"6b25a3f0d9d93c3873a409d950ab8345261bd9c339ed2f0e7e53895011ab2bb5"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.321921 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" event={"ID":"c3dc8b34-a9f3-4bd9-9ece-39f09bf8ec27","Type":"ContainerStarted","Data":"a9925de14298e349843b988c5f2f9dfe8decf2a67234ec66ca898b55eaaf07a9"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.326961 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" podStartSLOduration=129.326945645 podStartE2EDuration="2m9.326945645s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.282373917 +0000 UTC m=+148.038390093" watchObservedRunningTime="2026-01-20 09:21:13.326945645 +0000 UTC m=+148.082961811" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.328754 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.329180 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.329280 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkps\" (UniqueName: \"kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.330963 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.830942185 +0000 UTC m=+148.586958361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.334690 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.334373 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" podStartSLOduration=129.334360069 podStartE2EDuration="2m9.334360069s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.328118297 +0000 UTC m=+148.084134473" watchObservedRunningTime="2026-01-20 09:21:13.334360069 +0000 UTC m=+148.090376245" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.335768 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.835755647 +0000 UTC m=+148.591771833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.335972 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.351928 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7jqnq" event={"ID":"bd9a36ff-f9d5-4694-bf93-8762ec135ca8","Type":"ContainerStarted","Data":"fe5deb3f5b61474fa5387959b09c20f186c2c30c13d58745e41fc705da737efc"} Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.401865 4859 patch_prober.go:28] interesting pod/downloads-7954f5f757-nnrsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.401902 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nnrsm" podUID="a80a5e3b-eb7a-49f6-a9c5-1860decdfc75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.438153 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.438537 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.438595 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.438613 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkkps\" (UniqueName: \"kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " 
pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.439096 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:13.939068904 +0000 UTC m=+148.695085090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.440989 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.441322 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.462110 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2mcf" podStartSLOduration=129.462096989 podStartE2EDuration="2m9.462096989s" podCreationTimestamp="2026-01-20 
09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.421811829 +0000 UTC m=+148.177828005" watchObservedRunningTime="2026-01-20 09:21:13.462096989 +0000 UTC m=+148.218113155" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.462635 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ntdzq" podStartSLOduration=129.462629333 podStartE2EDuration="2m9.462629333s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.458184261 +0000 UTC m=+148.214200437" watchObservedRunningTime="2026-01-20 09:21:13.462629333 +0000 UTC m=+148.218645509" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.468101 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkps\" (UniqueName: \"kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps\") pod \"redhat-marketplace-wm427\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.531528 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7jqnq" podStartSLOduration=129.531511331 podStartE2EDuration="2m9.531511331s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.530935536 +0000 UTC m=+148.286951712" watchObservedRunningTime="2026-01-20 09:21:13.531511331 +0000 UTC m=+148.287527507" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.533866 4859 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" podStartSLOduration=129.533857816 podStartE2EDuration="2m9.533857816s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:13.489289538 +0000 UTC m=+148.245305714" watchObservedRunningTime="2026-01-20 09:21:13.533857816 +0000 UTC m=+148.289873992" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.540822 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.541015 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.541135 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.541449 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.543044 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.544214 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.044200921 +0000 UTC m=+148.800217097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.551294 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.557324 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.590660 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.608022 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.615038 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.648889 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.649138 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.651244 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.15122766 +0000 UTC m=+148.907243836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.664385 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.709657 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:13 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:13 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:13 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.709716 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.753701 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.754293 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.254277439 +0000 UTC m=+149.010293615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.855746 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.856074 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.356059073 +0000 UTC m=+149.112075249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.903283 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.953961 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.953996 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:13 crc kubenswrapper[4859]: I0120 09:21:13.958472 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:13 crc kubenswrapper[4859]: E0120 09:21:13.958809 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.458797173 +0000 UTC m=+149.214813349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.003817 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"] Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.004894 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.007628 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.017870 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"] Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.047462 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.059873 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.060261 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.560247079 +0000 UTC m=+149.316263255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.160829 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssgwr\" (UniqueName: \"kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.161308 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.161333 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.161368 4859 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.161627 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.661615082 +0000 UTC m=+149.417631258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.262269 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.262416 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.262437 
4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.262453 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssgwr\" (UniqueName: \"kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.262496 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.76246444 +0000 UTC m=+149.518480616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.264372 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.267044 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.281536 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssgwr\" (UniqueName: \"kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr\") pod \"redhat-operators-jvdhk\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.338525 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvdhk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.364361 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.364717 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.864702767 +0000 UTC m=+149.620718943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.387460 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"]
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.388643 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.393989 4859 patch_prober.go:28] interesting pod/downloads-7954f5f757-nnrsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.394038 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nnrsm" podUID="a80a5e3b-eb7a-49f6-a9c5-1860decdfc75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.394673 4859 patch_prober.go:28] interesting pod/downloads-7954f5f757-nnrsm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.394696 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nnrsm" podUID="a80a5e3b-eb7a-49f6-a9c5-1860decdfc75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.398848 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"]
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.421740 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"]
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.464595 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" event={"ID":"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2","Type":"ContainerStarted","Data":"bb8aad032a588e627333afb0d7a7f43e9f25057082c7247fcfbf70e6f108479e"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.465117 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.465475 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.965452403 +0000 UTC m=+149.721468579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.465562 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26g24\" (UniqueName: \"kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.465613 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.465638 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.465672 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.465962 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:14.965946857 +0000 UTC m=+149.721963023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.476260 4859 generic.go:334] "Generic (PLEG): container finished" podID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerID="505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686" exitCode=0
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.476325 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerDied","Data":"505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.482631 4859 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.483084 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a633c81e2a7a1053744616c0a618f80131782f47f5df0fee9704d716a9f9c5b1"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.499072 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" event={"ID":"a8e8337b-a5c8-468f-a859-9fa0ba3eb981","Type":"ContainerStarted","Data":"3f537a5df6d83af6762bd44be4a7ed80d1c5518b06f8cd9c2e4f51d9a8555e90"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.499875 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.515567 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqwvl" event={"ID":"f52b74ec-ff44-4ece-9ce7-9d71c781ede6","Type":"ContainerStarted","Data":"9aa69e2afb7775050506e92a3b6546b6134ee5b5c8cbb140e528bde733c0c115"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.521462 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerStarted","Data":"67b81ff418ddb614e8a33996e9bb53da1f239095b879cbf7e100aac70549c9d5"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.542179 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.544177 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c4jrg" event={"ID":"39408983-b8b3-4dc3-ab3c-f57031ce7a5d","Type":"ContainerStarted","Data":"7e0d12a15ee0437758dd4e94f7f0c30777231d615155968b6e7693a0ca7db079"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.544443 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c4jrg"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.560432 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" event={"ID":"8abb1de6-2a1d-4144-9836-f0ac771b67ce","Type":"ContainerStarted","Data":"286bff62c160a3cdfd493fdf8b1193bb1b292da9d068a69b2f0838ca3ba006d3"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.562053 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerStarted","Data":"aa1281fb40a65cd866f8fe24d8890ff7ffa5b051c5e5343c456be142470b978c"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.562070 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerStarted","Data":"aa9453135751697b6d09e88cccb9d00f6a6f229552c9671db69d32b6afd2f943"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.566945 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.567192 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26g24\" (UniqueName: \"kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.567250 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.567283 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.568249 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.068228755 +0000 UTC m=+149.824244941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.569442 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.569726 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.579268 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerStarted","Data":"0e1c5fe67f6374ad7c847d4b99c8d0791d1ac2d0127bd487696541b660189072"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.596277 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.596327 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.599012 4859 patch_prober.go:28] interesting pod/console-f9d7485db-7jqnq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.599066 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7jqnq" podUID="bd9a36ff-f9d5-4694-bf93-8762ec135ca8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.599493 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerStarted","Data":"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.599531 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerStarted","Data":"b7b203c3fd105c6f23f5e8478b09384ed51710a81e32b4e05b74f07ee2e0a1ad"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.600384 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26g24\" (UniqueName: \"kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24\") pod \"redhat-operators-sx2vc\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.630255 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" event={"ID":"7fbd020a-2b39-464a-a4af-965d3d5a4de1","Type":"ContainerStarted","Data":"e859103015c717bbf17879fa2ff39cf36fa147dd6d5a339e49bc62f2170f531d"}
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.630430 4859 patch_prober.go:28] interesting pod/downloads-7954f5f757-nnrsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.630464 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nnrsm" podUID="a80a5e3b-eb7a-49f6-a9c5-1860decdfc75" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.653350 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5nzb7"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.669957 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.702411 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.202388161 +0000 UTC m=+149.958404327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.706297 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wffjq"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.708117 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:21:14 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld
Jan 20 09:21:14 crc kubenswrapper[4859]: [+]process-running ok
Jan 20 09:21:14 crc kubenswrapper[4859]: healthz check failed
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.708160 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.710874 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx2vc"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.799918 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.801725 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.301700007 +0000 UTC m=+150.057716213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.805383 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"]
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.862072 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk" podStartSLOduration=130.862050701 podStartE2EDuration="2m10.862050701s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:14.860117827 +0000 UTC m=+149.616134013" watchObservedRunningTime="2026-01-20 09:21:14.862050701 +0000 UTC m=+149.618066877"
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.901828 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:14 crc kubenswrapper[4859]: E0120 09:21:14.902242 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.402226857 +0000 UTC m=+150.158243033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:14 crc kubenswrapper[4859]: I0120 09:21:14.979821 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c4jrg" podStartSLOduration=13.979805555 podStartE2EDuration="13.979805555s" podCreationTimestamp="2026-01-20 09:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:14.976209186 +0000 UTC m=+149.732225362" watchObservedRunningTime="2026-01-20 09:21:14.979805555 +0000 UTC m=+149.735821731"
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.010595 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.010929 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.510913552 +0000 UTC m=+150.266929728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.034852 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bppzw" podStartSLOduration=131.034832701 podStartE2EDuration="2m11.034832701s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:15.033513254 +0000 UTC m=+149.789529430" watchObservedRunningTime="2026-01-20 09:21:15.034832701 +0000 UTC m=+149.790848877"
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.111823 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.112456 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.612436569 +0000 UTC m=+150.368452745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.187497 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh"
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.213123 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.213303 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.713278277 +0000 UTC m=+150.469294453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.213355 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.213646 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.713639208 +0000 UTC m=+150.469655384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.273280 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"]
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.314436 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.314893 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.814871687 +0000 UTC m=+150.570887863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.314949 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.315242 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.815233296 +0000 UTC m=+150.571249482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.431297 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.431856 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:15.931835349 +0000 UTC m=+150.687851525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.533451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.533893 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:16.03387628 +0000 UTC m=+150.789892456 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.634594 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.635807 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:16.135746347 +0000 UTC m=+150.891762513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.636109 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.636501 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:21:16.136482278 +0000 UTC m=+150.892498454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxvwj" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.645516 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" event={"ID":"f4b1f233-d1d2-4f7e-b4cb-a05b7f0f48c2","Type":"ContainerStarted","Data":"4b0174ab4f3c2720b3af5a76d350ebd7d3e28dace1880f16cd4e6b705fefcfdd"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.647966 4859 generic.go:334] "Generic (PLEG): container finished" podID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerID="945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.648026 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerDied","Data":"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.667992 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" event={"ID":"7fbd020a-2b39-464a-a4af-965d3d5a4de1","Type":"ContainerStarted","Data":"631a4ac1c3dbeba585b35a8070c4205c18f01b9c8dfe0b4b2b31444dd94c7414"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.670086 4859 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.680067 
4859 generic.go:334] "Generic (PLEG): container finished" podID="445caea8-7708-4332-b903-dd1b9409c756" containerID="7e09973b8f77e441a9073c1cb134cf173c175b5ed4dffbf10b2afb638bc5a8cc" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.680125 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerDied","Data":"7e09973b8f77e441a9073c1cb134cf173c175b5ed4dffbf10b2afb638bc5a8cc"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.680148 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerStarted","Data":"d492dd272a0ea78b7584406b254f740075de684808d3e9671b85697fd3d104df"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.701729 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:15 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:15 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:15 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.701795 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.708174 4859 generic.go:334] "Generic (PLEG): container finished" podID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerID="aa1281fb40a65cd866f8fe24d8890ff7ffa5b051c5e5343c456be142470b978c" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.708236 4859 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerDied","Data":"aa1281fb40a65cd866f8fe24d8890ff7ffa5b051c5e5343c456be142470b978c"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.717627 4859 generic.go:334] "Generic (PLEG): container finished" podID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerID="505e8487ffce8ff8d31d48afeefcbad8fa7caf45acb7205f65fcc3260b6bac6e" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.717693 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerDied","Data":"505e8487ffce8ff8d31d48afeefcbad8fa7caf45acb7205f65fcc3260b6bac6e"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.723468 4859 generic.go:334] "Generic (PLEG): container finished" podID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerID="5bd5704ae30eaa6bfa0981ece12161659528cb7a69de9b058bbccfa69d89f778" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.723532 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerDied","Data":"5bd5704ae30eaa6bfa0981ece12161659528cb7a69de9b058bbccfa69d89f778"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.723557 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerStarted","Data":"9a99bf0ec1238990a450d6e64fdfa542ec319b48891496141da77f4fe9be42dd"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.726362 4859 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T09:21:15.670103754Z","Handler":null,"Name":""} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.728629 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"467eee28e8787612433185e10a4d88c9a5e0cf8ffafdb6ce93d57957925f95e7"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.736875 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:15 crc kubenswrapper[4859]: E0120 09:21:15.737767 4859 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 09:21:16.237752718 +0000 UTC m=+150.993768884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.738898 4859 generic.go:334] "Generic (PLEG): container finished" podID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerID="3b17f29fcdc8c0cf33dadbbf526184462949af8f482757348e521aefe93aa5a1" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.738956 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerDied","Data":"3b17f29fcdc8c0cf33dadbbf526184462949af8f482757348e521aefe93aa5a1"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.744575 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0116f585d7282ef55b96e63fa1649814119087488fa2d3695aa3b341a1abce43"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.745510 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e2024f8c4b463b0a09938b04817e6a880d7ba94dded7a45cc1a12183f7741087"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.747366 4859 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 09:21:15 crc 
kubenswrapper[4859]: I0120 09:21:15.747391 4859 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.749689 4859 generic.go:334] "Generic (PLEG): container finished" podID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerID="df6fb8771a1826ee187f56161e672b912f545d550de6cb209bb2736ab07736d1" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.749955 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerDied","Data":"df6fb8771a1826ee187f56161e672b912f545d550de6cb209bb2736ab07736d1"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.750017 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerStarted","Data":"8280ef27db03fa20e66a3d68a6575ad95d8e6b5f4e0b93b414979658b27c84af"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.761589 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" event={"ID":"89c4974e-874d-414c-9a9b-987b6e9c9a5c","Type":"ContainerStarted","Data":"38c99140d039f346ce5773b701fb427e191c34219dac8ea6684460c2bd7fc2e7"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.769251 4859 generic.go:334] "Generic (PLEG): container finished" podID="f6e7bf26-160c-4f98-b533-a9433061df3e" containerID="dda355d60afe64996e5a6c5357d0f984565062f6c111c9e14bb359c77ccc36bb" exitCode=0 Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.769354 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" 
event={"ID":"f6e7bf26-160c-4f98-b533-a9433061df3e","Type":"ContainerDied","Data":"dda355d60afe64996e5a6c5357d0f984565062f6c111c9e14bb359c77ccc36bb"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.782173 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"91abf9ce0e2366dd9848c0c05e1b541ebf508c64a5b6bc62c17bb18245830508"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.782216 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6390e1522f6ceaed8bdac70daf9e2e6eab6d320de0d9c0db3faf0659a75a711c"} Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.783018 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.807728 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w8pvq" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.808182 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k2vrh" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.841920 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.855063 4859 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.855301 4859 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.920137 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" podStartSLOduration=131.920112682 podStartE2EDuration="2m11.920112682s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:15.892771989 +0000 UTC m=+150.648788175" watchObservedRunningTime="2026-01-20 09:21:15.920112682 +0000 UTC m=+150.676128858" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.924070 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxvwj\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.939084 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hqm8v" podStartSLOduration=131.939065084 podStartE2EDuration="2m11.939065084s" 
podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:15.93346356 +0000 UTC m=+150.689479736" watchObservedRunningTime="2026-01-20 09:21:15.939065084 +0000 UTC m=+150.695081260" Jan 20 09:21:15 crc kubenswrapper[4859]: I0120 09:21:15.967534 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.007206 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.020628 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.103415 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.104304 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.107679 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.107892 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.122036 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.174102 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.174265 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.279809 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.279891 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.280238 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.311182 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.362895 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:21:16 crc kubenswrapper[4859]: W0120 09:21:16.414700 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a7dcf75_d3ac_48f3_a9bb_66b37aa9be96.slice/crio-0cc6c973622d320792dd02208ece4ea50bf328b5c9f0ab5da60b168ead7b61e2 WatchSource:0}: Error finding container 0cc6c973622d320792dd02208ece4ea50bf328b5c9f0ab5da60b168ead7b61e2: Status 404 returned error can't find the container with id 0cc6c973622d320792dd02208ece4ea50bf328b5c9f0ab5da60b168ead7b61e2 Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.433085 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.703863 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:16 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:16 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:16 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.704218 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.820903 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" event={"ID":"89c4974e-874d-414c-9a9b-987b6e9c9a5c","Type":"ContainerStarted","Data":"3472389dfb9e1ca5e726c5b480f91a0e3095dea82fc69def40501c994545de06"} Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.820950 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" event={"ID":"89c4974e-874d-414c-9a9b-987b6e9c9a5c","Type":"ContainerStarted","Data":"cbc04dfa1dc7b4e966db4c2ae309f17babe4d62a0b6c80379dfba541f4f80d11"} Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.828579 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" event={"ID":"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96","Type":"ContainerStarted","Data":"8e1935398314eb518606ac13d1c632abeed0e3592a46af104d06041814e7e41c"} Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.828612 4859 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" event={"ID":"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96","Type":"ContainerStarted","Data":"0cc6c973622d320792dd02208ece4ea50bf328b5c9f0ab5da60b168ead7b61e2"} Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.828629 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.843151 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zbj6d" podStartSLOduration=15.843133583 podStartE2EDuration="15.843133583s" podCreationTimestamp="2026-01-20 09:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:16.842542837 +0000 UTC m=+151.598559023" watchObservedRunningTime="2026-01-20 09:21:16.843133583 +0000 UTC m=+151.599149759" Jan 20 09:21:16 crc kubenswrapper[4859]: I0120 09:21:16.897431 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" podStartSLOduration=132.897409368 podStartE2EDuration="2m12.897409368s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:16.897399908 +0000 UTC m=+151.653416074" watchObservedRunningTime="2026-01-20 09:21:16.897409368 +0000 UTC m=+151.653425544" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.089662 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.221589 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.427905 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume\") pod \"f6e7bf26-160c-4f98-b533-a9433061df3e\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.427981 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume\") pod \"f6e7bf26-160c-4f98-b533-a9433061df3e\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.428003 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqprc\" (UniqueName: \"kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc\") pod \"f6e7bf26-160c-4f98-b533-a9433061df3e\" (UID: \"f6e7bf26-160c-4f98-b533-a9433061df3e\") " Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.430649 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f6e7bf26-160c-4f98-b533-a9433061df3e" (UID: "f6e7bf26-160c-4f98-b533-a9433061df3e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.434864 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc" (OuterVolumeSpecName: "kube-api-access-qqprc") pod "f6e7bf26-160c-4f98-b533-a9433061df3e" (UID: "f6e7bf26-160c-4f98-b533-a9433061df3e"). 
InnerVolumeSpecName "kube-api-access-qqprc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.436920 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f6e7bf26-160c-4f98-b533-a9433061df3e" (UID: "f6e7bf26-160c-4f98-b533-a9433061df3e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.529805 4859 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e7bf26-160c-4f98-b533-a9433061df3e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.529838 4859 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6e7bf26-160c-4f98-b533-a9433061df3e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.529847 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqprc\" (UniqueName: \"kubernetes.io/projected/f6e7bf26-160c-4f98-b533-a9433061df3e-kube-api-access-qqprc\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.609586 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.705442 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:17 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 
09:21:17 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:17 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.705898 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.863595 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3b07feee-3935-43e0-94ff-7128822be66e","Type":"ContainerStarted","Data":"b9865629a1a5ed824cd06acba25244a282e8b29bdee2625a1e0f385a8ddfa01e"} Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.863646 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3b07feee-3935-43e0-94ff-7128822be66e","Type":"ContainerStarted","Data":"7d479a52c37739d0cc4a1253aa00752ec1fee3662ff58c60880f936954ef8d84"} Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.869556 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" event={"ID":"f6e7bf26-160c-4f98-b533-a9433061df3e","Type":"ContainerDied","Data":"5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45"} Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.869599 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9a4fca4e51288df3fd6046e2e3f426d55d61ab47df94279ae95403e09edf45" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.869675 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-nf2rk" Jan 20 09:21:17 crc kubenswrapper[4859]: I0120 09:21:17.888192 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.8881703060000001 podStartE2EDuration="1.888170306s" podCreationTimestamp="2026-01-20 09:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:17.877117252 +0000 UTC m=+152.633133428" watchObservedRunningTime="2026-01-20 09:21:17.888170306 +0000 UTC m=+152.644186482" Jan 20 09:21:18 crc kubenswrapper[4859]: I0120 09:21:18.702561 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:18 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:18 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:18 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:18 crc kubenswrapper[4859]: I0120 09:21:18.702624 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:18 crc kubenswrapper[4859]: I0120 09:21:18.886736 4859 generic.go:334] "Generic (PLEG): container finished" podID="3b07feee-3935-43e0-94ff-7128822be66e" containerID="b9865629a1a5ed824cd06acba25244a282e8b29bdee2625a1e0f385a8ddfa01e" exitCode=0 Jan 20 09:21:18 crc kubenswrapper[4859]: I0120 09:21:18.886807 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"3b07feee-3935-43e0-94ff-7128822be66e","Type":"ContainerDied","Data":"b9865629a1a5ed824cd06acba25244a282e8b29bdee2625a1e0f385a8ddfa01e"} Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.097195 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.097254 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.104251 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.251753 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 09:21:19 crc kubenswrapper[4859]: E0120 09:21:19.252053 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e7bf26-160c-4f98-b533-a9433061df3e" containerName="collect-profiles" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.252072 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e7bf26-160c-4f98-b533-a9433061df3e" containerName="collect-profiles" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.252169 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e7bf26-160c-4f98-b533-a9433061df3e" containerName="collect-profiles" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.252547 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.256957 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.258371 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.261021 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.354974 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.355056 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.455855 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.456248 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.456472 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.492073 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.583580 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.700142 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:19 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:19 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:19 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.700189 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:19 crc kubenswrapper[4859]: I0120 09:21:19.906061 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-k6nx9" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.103014 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.264480 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.384102 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access\") pod \"3b07feee-3935-43e0-94ff-7128822be66e\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.384227 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir\") pod \"3b07feee-3935-43e0-94ff-7128822be66e\" (UID: \"3b07feee-3935-43e0-94ff-7128822be66e\") " Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.384544 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b07feee-3935-43e0-94ff-7128822be66e" (UID: "3b07feee-3935-43e0-94ff-7128822be66e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.389697 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b07feee-3935-43e0-94ff-7128822be66e" (UID: "3b07feee-3935-43e0-94ff-7128822be66e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.485563 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b07feee-3935-43e0-94ff-7128822be66e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.485593 4859 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b07feee-3935-43e0-94ff-7128822be66e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.702080 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:20 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:20 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:20 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.702153 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.916122 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec76a4b7-54ac-4669-b5d1-50cf28be486d","Type":"ContainerStarted","Data":"fce1c1bdc654f724308d09563b37d2f17c4aadccdb68907c0373a6c78c2ceb14"} Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.919106 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.924878 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3b07feee-3935-43e0-94ff-7128822be66e","Type":"ContainerDied","Data":"7d479a52c37739d0cc4a1253aa00752ec1fee3662ff58c60880f936954ef8d84"} Jan 20 09:21:20 crc kubenswrapper[4859]: I0120 09:21:20.924950 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d479a52c37739d0cc4a1253aa00752ec1fee3662ff58c60880f936954ef8d84" Jan 20 09:21:21 crc kubenswrapper[4859]: I0120 09:21:21.702237 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:21 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:21 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:21 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:21 crc kubenswrapper[4859]: I0120 09:21:21.702291 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:21 crc kubenswrapper[4859]: I0120 09:21:21.932357 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec76a4b7-54ac-4669-b5d1-50cf28be486d","Type":"ContainerStarted","Data":"9424dfff47b59ccd5ccd516a4b1855cb1f6576bcf4495db740134f32a75f7ab1"} Jan 20 09:21:21 crc kubenswrapper[4859]: I0120 09:21:21.951461 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
podStartSLOduration=2.951446367 podStartE2EDuration="2.951446367s" podCreationTimestamp="2026-01-20 09:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:21.948193018 +0000 UTC m=+156.704209194" watchObservedRunningTime="2026-01-20 09:21:21.951446367 +0000 UTC m=+156.707462543" Jan 20 09:21:22 crc kubenswrapper[4859]: I0120 09:21:22.703881 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:22 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:22 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:22 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:22 crc kubenswrapper[4859]: I0120 09:21:22.704150 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:22 crc kubenswrapper[4859]: I0120 09:21:22.940543 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c4jrg" Jan 20 09:21:22 crc kubenswrapper[4859]: I0120 09:21:22.947443 4859 generic.go:334] "Generic (PLEG): container finished" podID="ec76a4b7-54ac-4669-b5d1-50cf28be486d" containerID="9424dfff47b59ccd5ccd516a4b1855cb1f6576bcf4495db740134f32a75f7ab1" exitCode=0 Jan 20 09:21:22 crc kubenswrapper[4859]: I0120 09:21:22.947494 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec76a4b7-54ac-4669-b5d1-50cf28be486d","Type":"ContainerDied","Data":"9424dfff47b59ccd5ccd516a4b1855cb1f6576bcf4495db740134f32a75f7ab1"} Jan 20 09:21:23 crc 
kubenswrapper[4859]: I0120 09:21:23.700629 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:23 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:23 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:23 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:23 crc kubenswrapper[4859]: I0120 09:21:23.700675 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:24 crc kubenswrapper[4859]: I0120 09:21:24.401216 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nnrsm" Jan 20 09:21:24 crc kubenswrapper[4859]: I0120 09:21:24.597748 4859 patch_prober.go:28] interesting pod/console-f9d7485db-7jqnq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 20 09:21:24 crc kubenswrapper[4859]: I0120 09:21:24.597851 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7jqnq" podUID="bd9a36ff-f9d5-4694-bf93-8762ec135ca8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 20 09:21:24 crc kubenswrapper[4859]: I0120 09:21:24.701693 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Jan 20 09:21:24 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:24 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:24 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:24 crc kubenswrapper[4859]: I0120 09:21:24.701761 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:25 crc kubenswrapper[4859]: I0120 09:21:25.700420 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:25 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:25 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:25 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:25 crc kubenswrapper[4859]: I0120 09:21:25.700468 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:26 crc kubenswrapper[4859]: I0120 09:21:26.700308 4859 patch_prober.go:28] interesting pod/router-default-5444994796-wffjq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:21:26 crc kubenswrapper[4859]: [-]has-synced failed: reason withheld Jan 20 09:21:26 crc kubenswrapper[4859]: [+]process-running ok Jan 20 09:21:26 crc kubenswrapper[4859]: healthz check failed Jan 20 09:21:26 crc kubenswrapper[4859]: I0120 09:21:26.700385 4859 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-wffjq" podUID="34034d1a-c537-454b-a196-592ec6f2e43f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:21:26 crc kubenswrapper[4859]: I0120 09:21:26.793980 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:21:26 crc kubenswrapper[4859]: I0120 09:21:26.805959 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c059dec-0bda-4110-9050-7cbba39eb183-metrics-certs\") pod \"network-metrics-daemon-tw45n\" (UID: \"0c059dec-0bda-4110-9050-7cbba39eb183\") " pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:21:26 crc kubenswrapper[4859]: I0120 09:21:26.887936 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tw45n" Jan 20 09:21:27 crc kubenswrapper[4859]: I0120 09:21:27.702012 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:27 crc kubenswrapper[4859]: I0120 09:21:27.704983 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wffjq" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.670236 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"] Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.670827 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager" containerID="cri-o://ceeda78246b1ee976e23477387496689692c2250818d94ef67da19dbe427b959" gracePeriod=30 Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.695949 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"] Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.696165 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager" containerID="cri-o://bb1e8657fe88d6f9ddaa22907b4aa3954d140f1f10e356187f4330d1643b9da2" gracePeriod=30 Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.700958 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.820444 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access\") pod \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.820577 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir\") pod \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\" (UID: \"ec76a4b7-54ac-4669-b5d1-50cf28be486d\") " Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.820682 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec76a4b7-54ac-4669-b5d1-50cf28be486d" (UID: "ec76a4b7-54ac-4669-b5d1-50cf28be486d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.820980 4859 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.835962 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec76a4b7-54ac-4669-b5d1-50cf28be486d" (UID: "ec76a4b7-54ac-4669-b5d1-50cf28be486d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.922814 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec76a4b7-54ac-4669-b5d1-50cf28be486d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.982142 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec76a4b7-54ac-4669-b5d1-50cf28be486d","Type":"ContainerDied","Data":"fce1c1bdc654f724308d09563b37d2f17c4aadccdb68907c0373a6c78c2ceb14"} Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.982191 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce1c1bdc654f724308d09563b37d2f17c4aadccdb68907c0373a6c78c2ceb14" Jan 20 09:21:28 crc kubenswrapper[4859]: I0120 09:21:28.982206 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 09:21:29 crc kubenswrapper[4859]: I0120 09:21:29.989274 4859 generic.go:334] "Generic (PLEG): container finished" podID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerID="ceeda78246b1ee976e23477387496689692c2250818d94ef67da19dbe427b959" exitCode=0 Jan 20 09:21:29 crc kubenswrapper[4859]: I0120 09:21:29.989360 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" event={"ID":"8bc0a7bf-e710-45d8-90df-576a0cbcf06d","Type":"ContainerDied","Data":"ceeda78246b1ee976e23477387496689692c2250818d94ef67da19dbe427b959"} Jan 20 09:21:29 crc kubenswrapper[4859]: I0120 09:21:29.991386 4859 generic.go:334] "Generic (PLEG): container finished" podID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerID="bb1e8657fe88d6f9ddaa22907b4aa3954d140f1f10e356187f4330d1643b9da2" exitCode=0 Jan 20 09:21:29 crc kubenswrapper[4859]: I0120 09:21:29.991432 4859 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" event={"ID":"99087dbb-8011-483e-87b6-fe5cb4bc203b","Type":"ContainerDied","Data":"bb1e8657fe88d6f9ddaa22907b4aa3954d140f1f10e356187f4330d1643b9da2"} Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.135831 4859 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9dw2z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.136229 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.189448 4859 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-62vdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.190032 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.719663 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7jqnq" 
Jan 20 09:21:34 crc kubenswrapper[4859]: I0120 09:21:34.726900 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7jqnq"
Jan 20 09:21:36 crc kubenswrapper[4859]: I0120 09:21:36.014049 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj"
Jan 20 09:21:40 crc kubenswrapper[4859]: I0120 09:21:40.048963 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:21:40 crc kubenswrapper[4859]: I0120 09:21:40.049045 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:21:44 crc kubenswrapper[4859]: I0120 09:21:44.791831 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gvtgk"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.135106 4859 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9dw2z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.135456 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.167156 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.188692 4859 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-62vdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.188814 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.204899 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"]
Jan 20 09:21:45 crc kubenswrapper[4859]: E0120 09:21:45.205124 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b07feee-3935-43e0-94ff-7128822be66e" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205138 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b07feee-3935-43e0-94ff-7128822be66e" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: E0120 09:21:45.205156 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec76a4b7-54ac-4669-b5d1-50cf28be486d" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205164 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec76a4b7-54ac-4669-b5d1-50cf28be486d" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: E0120 09:21:45.205179 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205187 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205292 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" containerName="route-controller-manager"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205314 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b07feee-3935-43e0-94ff-7128822be66e" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205324 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec76a4b7-54ac-4669-b5d1-50cf28be486d" containerName="pruner"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.205764 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.217671 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"]
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.256940 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config\") pod \"99087dbb-8011-483e-87b6-fe5cb4bc203b\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") "
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257016 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca\") pod \"99087dbb-8011-483e-87b6-fe5cb4bc203b\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") "
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257054 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert\") pod \"99087dbb-8011-483e-87b6-fe5cb4bc203b\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") "
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257079 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trhwm\" (UniqueName: \"kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm\") pod \"99087dbb-8011-483e-87b6-fe5cb4bc203b\" (UID: \"99087dbb-8011-483e-87b6-fe5cb4bc203b\") "
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257298 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257346 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257363 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4lx\" (UniqueName: \"kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.257391 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.259764 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca" (OuterVolumeSpecName: "client-ca") pod "99087dbb-8011-483e-87b6-fe5cb4bc203b" (UID: "99087dbb-8011-483e-87b6-fe5cb4bc203b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.260177 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config" (OuterVolumeSpecName: "config") pod "99087dbb-8011-483e-87b6-fe5cb4bc203b" (UID: "99087dbb-8011-483e-87b6-fe5cb4bc203b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.265524 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99087dbb-8011-483e-87b6-fe5cb4bc203b" (UID: "99087dbb-8011-483e-87b6-fe5cb4bc203b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.267493 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm" (OuterVolumeSpecName: "kube-api-access-trhwm") pod "99087dbb-8011-483e-87b6-fe5cb4bc203b" (UID: "99087dbb-8011-483e-87b6-fe5cb4bc203b"). InnerVolumeSpecName "kube-api-access-trhwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.358646 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.358692 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4lx\" (UniqueName: \"kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.358757 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.359639 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.359704 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.359740 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.360575 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99087dbb-8011-483e-87b6-fe5cb4bc203b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.360588 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trhwm\" (UniqueName: \"kubernetes.io/projected/99087dbb-8011-483e-87b6-fe5cb4bc203b-kube-api-access-trhwm\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.360603 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99087dbb-8011-483e-87b6-fe5cb4bc203b-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.360528 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.373752 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.377156 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4lx\" (UniqueName: \"kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx\") pod \"route-controller-manager-69d68cb874-tzsjc\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:45 crc kubenswrapper[4859]: I0120 09:21:45.527197 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.083769 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z" event={"ID":"99087dbb-8011-483e-87b6-fe5cb4bc203b","Type":"ContainerDied","Data":"adb9b50dd5eeffd0e1a17004847a548f698a8061bf47dc1b1359c4783ac6fc04"}
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.084124 4859 scope.go:117] "RemoveContainer" containerID="bb1e8657fe88d6f9ddaa22907b4aa3954d140f1f10e356187f4330d1643b9da2"
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.083923 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.124016 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"]
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.130370 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9dw2z"]
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.491629 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572102 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca\") pod \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") "
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572519 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert\") pod \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") "
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572555 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config\") pod \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") "
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572585 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzw62\" (UniqueName: \"kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62\") pod \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") "
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572642 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles\") pod \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\" (UID: \"8bc0a7bf-e710-45d8-90df-576a0cbcf06d\") "
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.572873 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8bc0a7bf-e710-45d8-90df-576a0cbcf06d" (UID: "8bc0a7bf-e710-45d8-90df-576a0cbcf06d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.573070 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config" (OuterVolumeSpecName: "config") pod "8bc0a7bf-e710-45d8-90df-576a0cbcf06d" (UID: "8bc0a7bf-e710-45d8-90df-576a0cbcf06d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.573243 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8bc0a7bf-e710-45d8-90df-576a0cbcf06d" (UID: "8bc0a7bf-e710-45d8-90df-576a0cbcf06d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.579587 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8bc0a7bf-e710-45d8-90df-576a0cbcf06d" (UID: "8bc0a7bf-e710-45d8-90df-576a0cbcf06d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.581080 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62" (OuterVolumeSpecName: "kube-api-access-hzw62") pod "8bc0a7bf-e710-45d8-90df-576a0cbcf06d" (UID: "8bc0a7bf-e710-45d8-90df-576a0cbcf06d"). InnerVolumeSpecName "kube-api-access-hzw62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.673339 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.673375 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.673387 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.673396 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzw62\" (UniqueName: \"kubernetes.io/projected/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-kube-api-access-hzw62\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:46 crc kubenswrapper[4859]: I0120 09:21:46.673406 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bc0a7bf-e710-45d8-90df-576a0cbcf06d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:21:46 crc kubenswrapper[4859]: E0120 09:21:46.851265 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 20 09:21:46 crc kubenswrapper[4859]: E0120 09:21:46.851423 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkkps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wm427_openshift-marketplace(445caea8-7708-4332-b903-dd1b9409c756): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 09:21:46 crc kubenswrapper[4859]: E0120 09:21:46.852616 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wm427" podUID="445caea8-7708-4332-b903-dd1b9409c756"
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.090407 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh"
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.092920 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-62vdh" event={"ID":"8bc0a7bf-e710-45d8-90df-576a0cbcf06d","Type":"ContainerDied","Data":"904772a36db62508b522715aa52220f430759c567e5e77cd472d4976a5571f29"}
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.138894 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"]
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.141390 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-62vdh"]
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.579362 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" path="/var/lib/kubelet/pods/8bc0a7bf-e710-45d8-90df-576a0cbcf06d/volumes"
Jan 20 09:21:47 crc kubenswrapper[4859]: I0120 09:21:47.580365 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99087dbb-8011-483e-87b6-fe5cb4bc203b" path="/var/lib/kubelet/pods/99087dbb-8011-483e-87b6-fe5cb4bc203b/volumes"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.615000 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"]
Jan 20 09:21:48 crc kubenswrapper[4859]: E0120 09:21:48.615211 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.615223 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.615328 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc0a7bf-e710-45d8-90df-576a0cbcf06d" containerName="controller-manager"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.615673 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.619377 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.619599 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.619996 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.620113 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.620278 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.620384 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.622747 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.633387 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"]
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.676489 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"]
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.691601 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.691665 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.691685 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.691703 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.691734 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmsb\" (UniqueName: \"kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.793418 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.793494 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.793514 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.793536 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.793572 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmsb\" (UniqueName: \"kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.794683 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.794875 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.795127 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.805684 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.808396 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmsb\" (UniqueName: \"kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb\") pod \"controller-manager-6dcc4c8598-mw2wk\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:48 crc kubenswrapper[4859]: I0120 09:21:48.929898 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"
Jan 20 09:21:49 crc kubenswrapper[4859]: I0120 09:21:49.309441 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"]
Jan 20 09:21:49 crc kubenswrapper[4859]: I0120 09:21:49.886937 4859 scope.go:117] "RemoveContainer" containerID="ceeda78246b1ee976e23477387496689692c2250818d94ef67da19dbe427b959"
Jan 20 09:21:49 crc kubenswrapper[4859]: E0120 09:21:49.889140 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wm427" podUID="445caea8-7708-4332-b903-dd1b9409c756"
Jan 20 09:21:50 crc kubenswrapper[4859]: I0120 09:21:50.057054 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tw45n"]
Jan 20 09:21:53 crc kubenswrapper[4859]: I0120 09:21:53.907006 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 09:21:53 crc kubenswrapper[4859]: E0120 09:21:53.972352 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 20 09:21:53 crc kubenswrapper[4859]: E0120 09:21:53.972739 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssgwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jvdhk_openshift-marketplace(73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 09:21:53 crc kubenswrapper[4859]: E0120 09:21:53.973925 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jvdhk" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41"
Jan 20 09:21:55 crc kubenswrapper[4859]: E0120 09:21:55.425462 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 20 09:21:55 crc kubenswrapper[4859]: E0120 09:21:55.425821 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h89vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nklcx_openshift-marketplace(e1957112-94e7-495e-8d4a-bb9bac57988c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:21:55 crc kubenswrapper[4859]: E0120 09:21:55.426961 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nklcx" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" Jan 20 09:21:55 crc 
kubenswrapper[4859]: I0120 09:21:55.840677 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.844387 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.846717 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.846831 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.848451 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.888573 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.888811 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.989773 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.989839 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:55 crc kubenswrapper[4859]: I0120 09:21:55.989856 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:56 crc kubenswrapper[4859]: I0120 09:21:56.006392 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:56 crc kubenswrapper[4859]: I0120 09:21:56.167762 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:21:56 crc kubenswrapper[4859]: E0120 09:21:56.814269 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nklcx" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" Jan 20 09:21:56 crc kubenswrapper[4859]: E0120 09:21:56.814273 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jvdhk" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" Jan 20 09:21:56 crc kubenswrapper[4859]: W0120 09:21:56.815017 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c059dec_0bda_4110_9050_7cbba39eb183.slice/crio-fb7225767b4da948e79b9e4da15a9e143be37a20d2447936b02fd87a6c1ef9a0 WatchSource:0}: Error finding container fb7225767b4da948e79b9e4da15a9e143be37a20d2447936b02fd87a6c1ef9a0: Status 404 returned error can't find the container with id fb7225767b4da948e79b9e4da15a9e143be37a20d2447936b02fd87a6c1ef9a0 Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.048166 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"] Jan 20 09:21:57 crc kubenswrapper[4859]: W0120 09:21:57.050969 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2929543_4f19_42f5_8f1d_851c8d5955c0.slice/crio-b1cafbdeb3a49d4fdccfd8de5635b4b01ff78bde2f03d13e32933f4d4e20bb49 WatchSource:0}: Error finding container 
b1cafbdeb3a49d4fdccfd8de5635b4b01ff78bde2f03d13e32933f4d4e20bb49: Status 404 returned error can't find the container with id b1cafbdeb3a49d4fdccfd8de5635b4b01ff78bde2f03d13e32933f4d4e20bb49 Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.089608 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"] Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.127479 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.145294 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tw45n" event={"ID":"0c059dec-0bda-4110-9050-7cbba39eb183","Type":"ContainerStarted","Data":"fb7225767b4da948e79b9e4da15a9e143be37a20d2447936b02fd87a6c1ef9a0"} Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.146546 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" event={"ID":"b2929543-4f19-42f5-8f1d-851c8d5955c0","Type":"ContainerStarted","Data":"b1cafbdeb3a49d4fdccfd8de5635b4b01ff78bde2f03d13e32933f4d4e20bb49"} Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.148621 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" event={"ID":"91b9c138-8d5d-444c-9c87-b0588370e986","Type":"ContainerStarted","Data":"71472cbc86db6910b657ba8389201f19cb916c6dbe92c10c8c418949a0b6b863"} Jan 20 09:21:57 crc kubenswrapper[4859]: I0120 09:21:57.151270 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerStarted","Data":"e2c3c5be311010aa13a2b148dfa9381206beb074fb23d0b59e457bfdd902a3a1"} Jan 20 09:21:57 crc kubenswrapper[4859]: W0120 09:21:57.153654 4859 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8471c5d0_76cb_41ba_a724_9345834c5efe.slice/crio-cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af WatchSource:0}: Error finding container cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af: Status 404 returned error can't find the container with id cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af Jan 20 09:21:57 crc kubenswrapper[4859]: E0120 09:21:57.254025 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 09:21:57 crc kubenswrapper[4859]: E0120 09:21:57.254152 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzs94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xnjx9_openshift-marketplace(0c0ea750-41ef-4b4e-a574-2e50b3563f8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:21:57 crc kubenswrapper[4859]: E0120 09:21:57.255429 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xnjx9" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" Jan 20 09:21:58 crc 
kubenswrapper[4859]: I0120 09:21:58.163056 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8471c5d0-76cb-41ba-a724-9345834c5efe","Type":"ContainerStarted","Data":"cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af"} Jan 20 09:21:58 crc kubenswrapper[4859]: I0120 09:21:58.166190 4859 generic.go:334] "Generic (PLEG): container finished" podID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerID="e2c3c5be311010aa13a2b148dfa9381206beb074fb23d0b59e457bfdd902a3a1" exitCode=0 Jan 20 09:21:58 crc kubenswrapper[4859]: I0120 09:21:58.166305 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerDied","Data":"e2c3c5be311010aa13a2b148dfa9381206beb074fb23d0b59e457bfdd902a3a1"} Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.167994 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xnjx9" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.698305 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.698657 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggch5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2cm2d_openshift-marketplace(662d5810-d101-40f8-9cf9-6e46d3177b6a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.699864 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2cm2d" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" Jan 20 09:21:58 crc 
kubenswrapper[4859]: E0120 09:21:58.865362 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.865508 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26g24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-sx2vc_openshift-marketplace(a8295b62-6cc7-4fed-985e-268605a7e4f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:21:58 crc kubenswrapper[4859]: E0120 09:21:58.866694 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sx2vc" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.178429 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8471c5d0-76cb-41ba-a724-9345834c5efe","Type":"ContainerStarted","Data":"b83c86e798e88fb7723d1d9dc9240eaed4e765d64437fc1635437f8dddb79171"} Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.184177 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" event={"ID":"b2929543-4f19-42f5-8f1d-851c8d5955c0","Type":"ContainerStarted","Data":"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae"} Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.184834 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.194193 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" podUID="91b9c138-8d5d-444c-9c87-b0588370e986" containerName="route-controller-manager" containerID="cri-o://7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287" gracePeriod=30 Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.194494 4859 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" event={"ID":"91b9c138-8d5d-444c-9c87-b0588370e986","Type":"ContainerStarted","Data":"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287"} Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.194774 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.195422 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.195406875 podStartE2EDuration="4.195406875s" podCreationTimestamp="2026-01-20 09:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:59.193645036 +0000 UTC m=+193.949661212" watchObservedRunningTime="2026-01-20 09:21:59.195406875 +0000 UTC m=+193.951423051" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.198829 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.201626 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tw45n" event={"ID":"0c059dec-0bda-4110-9050-7cbba39eb183","Type":"ContainerStarted","Data":"1c2a08d6d009b338f7ea12707d33f5eb797a892114df2d0be06aa6719916db56"} Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.201667 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tw45n" event={"ID":"0c059dec-0bda-4110-9050-7cbba39eb183","Type":"ContainerStarted","Data":"6786a002c55304032ee0219d623eceb85a522f480365927649e8aa770439f569"} Jan 20 09:21:59 crc kubenswrapper[4859]: E0120 09:21:59.202942 4859 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sx2vc" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" Jan 20 09:21:59 crc kubenswrapper[4859]: E0120 09:21:59.203346 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2cm2d" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.212821 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.224164 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" podStartSLOduration=11.224145066 podStartE2EDuration="11.224145066s" podCreationTimestamp="2026-01-20 09:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:21:59.217580706 +0000 UTC m=+193.973596882" watchObservedRunningTime="2026-01-20 09:21:59.224145066 +0000 UTC m=+193.980161242" Jan 20 09:21:59 crc kubenswrapper[4859]: I0120 09:21:59.271161 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" podStartSLOduration=31.271144861 podStartE2EDuration="31.271144861s" podCreationTimestamp="2026-01-20 09:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 
09:21:59.270053121 +0000 UTC m=+194.026069297" watchObservedRunningTime="2026-01-20 09:21:59.271144861 +0000 UTC m=+194.027161037" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.093440 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.121701 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.121940 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b9c138-8d5d-444c-9c87-b0588370e986" containerName="route-controller-manager" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.121952 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b9c138-8d5d-444c-9c87-b0588370e986" containerName="route-controller-manager" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.122063 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b9c138-8d5d-444c-9c87-b0588370e986" containerName="route-controller-manager" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.122402 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.136462 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.137342 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.137521 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjw2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,
WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9sm4f_openshift-marketplace(cf69514d-13ef-4ba7-9a8a-1d2656df59fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.138640 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9sm4f" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160321 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4lx\" (UniqueName: \"kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx\") pod \"91b9c138-8d5d-444c-9c87-b0588370e986\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160455 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config\") pod \"91b9c138-8d5d-444c-9c87-b0588370e986\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160528 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca\") pod \"91b9c138-8d5d-444c-9c87-b0588370e986\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " Jan 20
09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160626 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert\") pod \"91b9c138-8d5d-444c-9c87-b0588370e986\" (UID: \"91b9c138-8d5d-444c-9c87-b0588370e986\") " Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160920 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.160984 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sp5f\" (UniqueName: \"kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.161062 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.161094 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config\") pod 
\"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.161248 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config" (OuterVolumeSpecName: "config") pod "91b9c138-8d5d-444c-9c87-b0588370e986" (UID: "91b9c138-8d5d-444c-9c87-b0588370e986"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.161597 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca" (OuterVolumeSpecName: "client-ca") pod "91b9c138-8d5d-444c-9c87-b0588370e986" (UID: "91b9c138-8d5d-444c-9c87-b0588370e986"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.172613 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx" (OuterVolumeSpecName: "kube-api-access-8d4lx") pod "91b9c138-8d5d-444c-9c87-b0588370e986" (UID: "91b9c138-8d5d-444c-9c87-b0588370e986"). InnerVolumeSpecName "kube-api-access-8d4lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.173990 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "91b9c138-8d5d-444c-9c87-b0588370e986" (UID: "91b9c138-8d5d-444c-9c87-b0588370e986"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.206646 4859 generic.go:334] "Generic (PLEG): container finished" podID="8471c5d0-76cb-41ba-a724-9345834c5efe" containerID="b83c86e798e88fb7723d1d9dc9240eaed4e765d64437fc1635437f8dddb79171" exitCode=0 Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.206716 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8471c5d0-76cb-41ba-a724-9345834c5efe","Type":"ContainerDied","Data":"b83c86e798e88fb7723d1d9dc9240eaed4e765d64437fc1635437f8dddb79171"} Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.207545 4859 generic.go:334] "Generic (PLEG): container finished" podID="91b9c138-8d5d-444c-9c87-b0588370e986" containerID="7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287" exitCode=0 Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.207588 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.207645 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" event={"ID":"91b9c138-8d5d-444c-9c87-b0588370e986","Type":"ContainerDied","Data":"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287"} Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.207669 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc" event={"ID":"91b9c138-8d5d-444c-9c87-b0588370e986","Type":"ContainerDied","Data":"71472cbc86db6910b657ba8389201f19cb916c6dbe92c10c8c418949a0b6b863"} Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.207687 4859 scope.go:117] "RemoveContainer" containerID="7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287" Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.209870 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9sm4f" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.238534 4859 scope.go:117] "RemoveContainer" containerID="7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287" Jan 20 09:22:00 crc kubenswrapper[4859]: E0120 09:22:00.240087 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287\": container with ID starting with 7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287 not found: ID does not exist" 
containerID="7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.240136 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287"} err="failed to get container status \"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287\": rpc error: code = NotFound desc = could not find container \"7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287\": container with ID starting with 7ed8779046c68efec02d289a54aee9af2833a3b55ade1dc590b1404bb0516287 not found: ID does not exist" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262425 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp5f\" (UniqueName: \"kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262479 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262521 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc 
kubenswrapper[4859]: I0120 09:22:00.262634 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262673 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91b9c138-8d5d-444c-9c87-b0588370e986-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262684 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4lx\" (UniqueName: \"kubernetes.io/projected/91b9c138-8d5d-444c-9c87-b0588370e986-kube-api-access-8d4lx\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262694 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.262702 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91b9c138-8d5d-444c-9c87-b0588370e986-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.263583 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.264222 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.266311 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tw45n" podStartSLOduration=176.26629854 podStartE2EDuration="2m56.26629854s" podCreationTimestamp="2026-01-20 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:00.258595917 +0000 UTC m=+195.014612103" watchObservedRunningTime="2026-01-20 09:22:00.26629854 +0000 UTC m=+195.022314716" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.266584 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.277911 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp5f\" (UniqueName: \"kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f\") pod \"route-controller-manager-64cd8dcbbb-9twkp\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.278195 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"] Jan 20 09:22:00 crc 
kubenswrapper[4859]: I0120 09:22:00.281187 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d68cb874-tzsjc"] Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.461140 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:00 crc kubenswrapper[4859]: I0120 09:22:00.641176 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.213931 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerStarted","Data":"1bc7b8287fd2d00d2b9fbf636f2de3c7043f3095dac2899d129c30b906938acb"} Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.216113 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" event={"ID":"5a86e29f-e02a-403e-88f7-031315bf5f49","Type":"ContainerStarted","Data":"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11"} Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.216142 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" event={"ID":"5a86e29f-e02a-403e-88f7-031315bf5f49","Type":"ContainerStarted","Data":"dac3f9d7251c7c5b2deec290ac0e9c8cbff33750e01f1ca343e454fa8f14daf5"} Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.233347 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qcvpq" podStartSLOduration=4.872063399 podStartE2EDuration="49.233328623s" podCreationTimestamp="2026-01-20 09:21:12 +0000 UTC" firstStartedPulling="2026-01-20 09:21:15.719909616 
+0000 UTC m=+150.475925792" lastFinishedPulling="2026-01-20 09:22:00.08117484 +0000 UTC m=+194.837191016" observedRunningTime="2026-01-20 09:22:01.231271447 +0000 UTC m=+195.987287623" watchObservedRunningTime="2026-01-20 09:22:01.233328623 +0000 UTC m=+195.989344799" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.489982 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.579124 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access\") pod \"8471c5d0-76cb-41ba-a724-9345834c5efe\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.579168 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir\") pod \"8471c5d0-76cb-41ba-a724-9345834c5efe\" (UID: \"8471c5d0-76cb-41ba-a724-9345834c5efe\") " Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.579348 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8471c5d0-76cb-41ba-a724-9345834c5efe" (UID: "8471c5d0-76cb-41ba-a724-9345834c5efe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.579702 4859 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8471c5d0-76cb-41ba-a724-9345834c5efe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.579935 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b9c138-8d5d-444c-9c87-b0588370e986" path="/var/lib/kubelet/pods/91b9c138-8d5d-444c-9c87-b0588370e986/volumes" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.587934 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8471c5d0-76cb-41ba-a724-9345834c5efe" (UID: "8471c5d0-76cb-41ba-a724-9345834c5efe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.637639 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 09:22:01 crc kubenswrapper[4859]: E0120 09:22:01.637845 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8471c5d0-76cb-41ba-a724-9345834c5efe" containerName="pruner" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.637857 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="8471c5d0-76cb-41ba-a724-9345834c5efe" containerName="pruner" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.637957 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="8471c5d0-76cb-41ba-a724-9345834c5efe" containerName="pruner" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.638342 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.653742 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.680599 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.680708 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.680751 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.681133 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8471c5d0-76cb-41ba-a724-9345834c5efe-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.782933 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.783005 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.783051 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.783128 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.783171 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.802243 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access\") pod \"installer-9-crc\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:01 crc kubenswrapper[4859]: I0120 09:22:01.961414 4859 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.226117 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8471c5d0-76cb-41ba-a724-9345834c5efe","Type":"ContainerDied","Data":"cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af"} Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.226176 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd442ad1f3ad59e5bd5b198ae8b586dc06aefd2db079f5c42d341a4e43e753af" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.226276 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.226415 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.238503 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.252666 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" podStartSLOduration=14.252646257 podStartE2EDuration="14.252646257s" podCreationTimestamp="2026-01-20 09:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:02.251454525 +0000 UTC m=+197.007470731" watchObservedRunningTime="2026-01-20 09:22:02.252646257 +0000 UTC m=+197.008662443" Jan 20 09:22:02 crc kubenswrapper[4859]: I0120 09:22:02.376312 4859 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 09:22:03 crc kubenswrapper[4859]: I0120 09:22:03.214733 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:03 crc kubenswrapper[4859]: I0120 09:22:03.214825 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:03 crc kubenswrapper[4859]: I0120 09:22:03.234844 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785","Type":"ContainerStarted","Data":"b66f1223df404aa91b469b818e9975ba9bce3bbb16f7217435bfe15a36d2e06d"} Jan 20 09:22:04 crc kubenswrapper[4859]: I0120 09:22:04.895057 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:05 crc kubenswrapper[4859]: I0120 09:22:05.249488 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785","Type":"ContainerStarted","Data":"8b89cd8b04557f694a8a312e779442c7a8883ed2fe976997110444d0c6d75ed5"} Jan 20 09:22:05 crc kubenswrapper[4859]: I0120 09:22:05.282556 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.282491754 podStartE2EDuration="4.282491754s" podCreationTimestamp="2026-01-20 09:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:05.278961151 +0000 UTC m=+200.034977327" watchObservedRunningTime="2026-01-20 09:22:05.282491754 +0000 UTC m=+200.038507960" Jan 20 09:22:08 crc kubenswrapper[4859]: I0120 09:22:08.489462 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerStarted","Data":"c20b936cbc71869de3241a98baa45c81d8917058f57c3c4acda300659cb91ea2"} Jan 20 09:22:08 crc kubenswrapper[4859]: I0120 09:22:08.603715 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"] Jan 20 09:22:08 crc kubenswrapper[4859]: I0120 09:22:08.604032 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerName="controller-manager" containerID="cri-o://f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae" gracePeriod=30 Jan 20 09:22:08 crc kubenswrapper[4859]: I0120 09:22:08.637509 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:08 crc kubenswrapper[4859]: I0120 09:22:08.638270 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" podUID="5a86e29f-e02a-403e-88f7-031315bf5f49" containerName="route-controller-manager" containerID="cri-o://ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11" gracePeriod=30 Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.154507 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.206342 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert\") pod \"5a86e29f-e02a-403e-88f7-031315bf5f49\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.206433 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config\") pod \"5a86e29f-e02a-403e-88f7-031315bf5f49\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.206474 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sp5f\" (UniqueName: \"kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f\") pod \"5a86e29f-e02a-403e-88f7-031315bf5f49\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.207633 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config" (OuterVolumeSpecName: "config") pod "5a86e29f-e02a-403e-88f7-031315bf5f49" (UID: "5a86e29f-e02a-403e-88f7-031315bf5f49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.207712 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca\") pod \"5a86e29f-e02a-403e-88f7-031315bf5f49\" (UID: \"5a86e29f-e02a-403e-88f7-031315bf5f49\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.208458 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.208682 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca" (OuterVolumeSpecName: "client-ca") pod "5a86e29f-e02a-403e-88f7-031315bf5f49" (UID: "5a86e29f-e02a-403e-88f7-031315bf5f49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.212731 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f" (OuterVolumeSpecName: "kube-api-access-9sp5f") pod "5a86e29f-e02a-403e-88f7-031315bf5f49" (UID: "5a86e29f-e02a-403e-88f7-031315bf5f49"). InnerVolumeSpecName "kube-api-access-9sp5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.213367 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5a86e29f-e02a-403e-88f7-031315bf5f49" (UID: "5a86e29f-e02a-403e-88f7-031315bf5f49"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.237886 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.309813 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drmsb\" (UniqueName: \"kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb\") pod \"b2929543-4f19-42f5-8f1d-851c8d5955c0\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310207 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config\") pod \"b2929543-4f19-42f5-8f1d-851c8d5955c0\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310265 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca\") pod \"b2929543-4f19-42f5-8f1d-851c8d5955c0\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310304 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert\") pod \"b2929543-4f19-42f5-8f1d-851c8d5955c0\" (UID: \"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310336 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles\") pod \"b2929543-4f19-42f5-8f1d-851c8d5955c0\" (UID: 
\"b2929543-4f19-42f5-8f1d-851c8d5955c0\") " Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310617 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a86e29f-e02a-403e-88f7-031315bf5f49-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310635 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a86e29f-e02a-403e-88f7-031315bf5f49-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.310647 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sp5f\" (UniqueName: \"kubernetes.io/projected/5a86e29f-e02a-403e-88f7-031315bf5f49-kube-api-access-9sp5f\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.311234 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2929543-4f19-42f5-8f1d-851c8d5955c0" (UID: "b2929543-4f19-42f5-8f1d-851c8d5955c0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.311269 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config" (OuterVolumeSpecName: "config") pod "b2929543-4f19-42f5-8f1d-851c8d5955c0" (UID: "b2929543-4f19-42f5-8f1d-851c8d5955c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.311662 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2929543-4f19-42f5-8f1d-851c8d5955c0" (UID: "b2929543-4f19-42f5-8f1d-851c8d5955c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.315020 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb" (OuterVolumeSpecName: "kube-api-access-drmsb") pod "b2929543-4f19-42f5-8f1d-851c8d5955c0" (UID: "b2929543-4f19-42f5-8f1d-851c8d5955c0"). InnerVolumeSpecName "kube-api-access-drmsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.316315 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2929543-4f19-42f5-8f1d-851c8d5955c0" (UID: "b2929543-4f19-42f5-8f1d-851c8d5955c0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.412400 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2929543-4f19-42f5-8f1d-851c8d5955c0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.412447 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.412467 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drmsb\" (UniqueName: \"kubernetes.io/projected/b2929543-4f19-42f5-8f1d-851c8d5955c0-kube-api-access-drmsb\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.412486 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.412503 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2929543-4f19-42f5-8f1d-851c8d5955c0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.503661 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.503857 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" event={"ID":"b2929543-4f19-42f5-8f1d-851c8d5955c0","Type":"ContainerDied","Data":"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.503585 4859 generic.go:334] "Generic (PLEG): container finished" podID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerID="f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae" exitCode=0 Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.504120 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" event={"ID":"b2929543-4f19-42f5-8f1d-851c8d5955c0","Type":"ContainerDied","Data":"b1cafbdeb3a49d4fdccfd8de5635b4b01ff78bde2f03d13e32933f4d4e20bb49"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.503989 4859 scope.go:117] "RemoveContainer" containerID="f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.509683 4859 generic.go:334] "Generic (PLEG): container finished" podID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerID="ff40a2e5888fdd6155eb3bfd87727ce27acc71996cfe8970403413b3772f934b" exitCode=0 Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.509811 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerDied","Data":"ff40a2e5888fdd6155eb3bfd87727ce27acc71996cfe8970403413b3772f934b"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.513954 4859 generic.go:334] "Generic (PLEG): container finished" podID="445caea8-7708-4332-b903-dd1b9409c756" 
containerID="c20b936cbc71869de3241a98baa45c81d8917058f57c3c4acda300659cb91ea2" exitCode=0 Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.514027 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerDied","Data":"c20b936cbc71869de3241a98baa45c81d8917058f57c3c4acda300659cb91ea2"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.520097 4859 generic.go:334] "Generic (PLEG): container finished" podID="5a86e29f-e02a-403e-88f7-031315bf5f49" containerID="ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11" exitCode=0 Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.520160 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" event={"ID":"5a86e29f-e02a-403e-88f7-031315bf5f49","Type":"ContainerDied","Data":"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.520205 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" event={"ID":"5a86e29f-e02a-403e-88f7-031315bf5f49","Type":"ContainerDied","Data":"dac3f9d7251c7c5b2deec290ac0e9c8cbff33750e01f1ca343e454fa8f14daf5"} Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.520262 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.555598 4859 scope.go:117] "RemoveContainer" containerID="f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae" Jan 20 09:22:09 crc kubenswrapper[4859]: E0120 09:22:09.559093 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae\": container with ID starting with f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae not found: ID does not exist" containerID="f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.559163 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae"} err="failed to get container status \"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae\": rpc error: code = NotFound desc = could not find container \"f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae\": container with ID starting with f39981f17710830f941862ae02b496be3e49ae799ecb1a8a56f847dbddf82bae not found: ID does not exist" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.559212 4859 scope.go:117] "RemoveContainer" containerID="ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.592881 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"] Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.592988 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk"] Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.613719 4859 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.621101 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64cd8dcbbb-9twkp"] Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.630064 4859 scope.go:117] "RemoveContainer" containerID="ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11" Jan 20 09:22:09 crc kubenswrapper[4859]: E0120 09:22:09.630842 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11\": container with ID starting with ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11 not found: ID does not exist" containerID="ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.630877 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11"} err="failed to get container status \"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11\": rpc error: code = NotFound desc = could not find container \"ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11\": container with ID starting with ed75762015ff2d085c750c06613fb1fd86f984ee6ac4dc11d7970c912af63f11 not found: ID does not exist" Jan 20 09:22:09 crc kubenswrapper[4859]: I0120 09:22:09.931151 4859 patch_prober.go:28] interesting pod/controller-manager-6dcc4c8598-mw2wk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:22:09 crc kubenswrapper[4859]: 
I0120 09:22:09.931351 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dcc4c8598-mw2wk" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.048258 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.049663 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.049959 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.056737 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.057610 4859 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845" gracePeriod=600 Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.526300 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 20 09:22:10 crc kubenswrapper[4859]: E0120 09:22:10.526815 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a86e29f-e02a-403e-88f7-031315bf5f49" containerName="route-controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.526832 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a86e29f-e02a-403e-88f7-031315bf5f49" containerName="route-controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: E0120 09:22:10.526843 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerName="controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.526852 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerName="controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.526976 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" containerName="controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.526987 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a86e29f-e02a-403e-88f7-031315bf5f49" containerName="route-controller-manager" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.527375 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.529152 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.529564 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.529695 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.529717 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.530361 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.530434 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.530268 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.530622 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.533470 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.533519 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.534128 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.534280 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.535852 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.535871 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.537700 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845" exitCode=0 Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.537768 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845"} Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.538041 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4"} Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.539627 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" 
event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerStarted","Data":"52b23cd5ec33c131ec27c9fedcb3c1f02048e5a822e25939c07fee71b2e151be"} Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.541076 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerStarted","Data":"49f641fbf5a65b04d3304f0b2e49992846db99d50d27ffb1359db1887bd53d11"} Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.543392 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.545563 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerStarted","Data":"523fcdb9eeb6074262461055eb869899dfd78916d7dcddb032b316544c654e6b"} Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.571898 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.591144 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wm427" podStartSLOduration=3.224055236 podStartE2EDuration="57.591127767s" podCreationTimestamp="2026-01-20 09:21:13 +0000 UTC" firstStartedPulling="2026-01-20 09:21:15.681525539 +0000 UTC m=+150.437541725" lastFinishedPulling="2026-01-20 09:22:10.04859804 +0000 UTC m=+204.804614256" observedRunningTime="2026-01-20 09:22:10.581266715 +0000 UTC m=+205.337282891" watchObservedRunningTime="2026-01-20 09:22:10.591127767 +0000 UTC m=+205.347143943" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.598871 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.622290 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nklcx" podStartSLOduration=5.42502822 podStartE2EDuration="59.622274616s" podCreationTimestamp="2026-01-20 09:21:11 +0000 UTC" firstStartedPulling="2026-01-20 09:21:15.742629472 +0000 UTC m=+150.498645648" lastFinishedPulling="2026-01-20 09:22:09.939875828 +0000 UTC m=+204.695892044" observedRunningTime="2026-01-20 09:22:10.621513826 +0000 UTC m=+205.377530012" watchObservedRunningTime="2026-01-20 09:22:10.622274616 +0000 UTC m=+205.378290792" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.628506 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.629221 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.629264 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " 
pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.629299 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jhj\" (UniqueName: \"kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.730948 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.731120 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.731593 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.731947 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732353 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732771 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jhj\" (UniqueName: \"kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732834 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732860 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trs84\" (UniqueName: \"kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " 
pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732920 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.732951 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.733705 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.748697 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jhj\" (UniqueName: \"kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.752295 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert\") pod \"route-controller-manager-66774c6f5d-dn2gt\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.833922 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.834374 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.834451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.834535 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.834576 4859 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trs84\" (UniqueName: \"kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.835692 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.836094 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.837439 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.839326 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 
20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.856194 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trs84\" (UniqueName: \"kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84\") pod \"controller-manager-547db796ff-v7d9d\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.861536 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:10 crc kubenswrapper[4859]: I0120 09:22:10.871099 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.319798 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 20 09:22:11 crc kubenswrapper[4859]: W0120 09:22:11.328046 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf91229d3_0e84_49af_a380_b09b7aea6ecd.slice/crio-07a9edd58a8ed6e35967546101b9d9897d1d1fb2bfb55a420e863e604831e638 WatchSource:0}: Error finding container 07a9edd58a8ed6e35967546101b9d9897d1d1fb2bfb55a420e863e604831e638: Status 404 returned error can't find the container with id 07a9edd58a8ed6e35967546101b9d9897d1d1fb2bfb55a420e863e604831e638 Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.378490 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:11 crc kubenswrapper[4859]: W0120 09:22:11.385195 4859 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45993ecc_ab33_409c_ba0d_7efd5ad4f5e3.slice/crio-c2d7dad5b6fa6427ecc7d9eaafb0db5c76b2abaa5b2fe4c10804c02ffdf4df2a WatchSource:0}: Error finding container c2d7dad5b6fa6427ecc7d9eaafb0db5c76b2abaa5b2fe4c10804c02ffdf4df2a: Status 404 returned error can't find the container with id c2d7dad5b6fa6427ecc7d9eaafb0db5c76b2abaa5b2fe4c10804c02ffdf4df2a Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.549354 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.554429 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" event={"ID":"f91229d3-0e84-49af-a380-b09b7aea6ecd","Type":"ContainerStarted","Data":"d637f901eb14b1d6519ff018808fbd8e9a7ddce4192ef8786720f6f5369cddac"} Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.554459 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" event={"ID":"f91229d3-0e84-49af-a380-b09b7aea6ecd","Type":"ContainerStarted","Data":"07a9edd58a8ed6e35967546101b9d9897d1d1fb2bfb55a420e863e604831e638"} Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.555040 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.556496 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" event={"ID":"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3","Type":"ContainerStarted","Data":"1261cdd610b741a2116a3052527128121d841300a7d6ef5614a79a4a4f762d3b"} Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.556838 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" event={"ID":"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3","Type":"ContainerStarted","Data":"c2d7dad5b6fa6427ecc7d9eaafb0db5c76b2abaa5b2fe4c10804c02ffdf4df2a"} Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.556854 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.557069 4859 patch_prober.go:28] interesting pod/route-controller-manager-66774c6f5d-dn2gt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.557097 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.559658 4859 patch_prober.go:28] interesting pod/controller-manager-547db796ff-v7d9d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.559719 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 20 09:22:11 
crc kubenswrapper[4859]: I0120 09:22:11.560370 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.562247 4859 generic.go:334] "Generic (PLEG): container finished" podID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerID="52b23cd5ec33c131ec27c9fedcb3c1f02048e5a822e25939c07fee71b2e151be" exitCode=0 Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.562323 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerDied","Data":"52b23cd5ec33c131ec27c9fedcb3c1f02048e5a822e25939c07fee71b2e151be"} Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.577234 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" podStartSLOduration=3.577218017 podStartE2EDuration="3.577218017s" podCreationTimestamp="2026-01-20 09:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:11.575299526 +0000 UTC m=+206.331315712" watchObservedRunningTime="2026-01-20 09:22:11.577218017 +0000 UTC m=+206.333234193" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.584425 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a86e29f-e02a-403e-88f7-031315bf5f49" path="/var/lib/kubelet/pods/5a86e29f-e02a-403e-88f7-031315bf5f49/volumes" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.585054 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2929543-4f19-42f5-8f1d-851c8d5955c0" path="/var/lib/kubelet/pods/b2929543-4f19-42f5-8f1d-851c8d5955c0/volumes" Jan 20 09:22:11 crc kubenswrapper[4859]: I0120 09:22:11.593703 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" podStartSLOduration=3.593681475 podStartE2EDuration="3.593681475s" podCreationTimestamp="2026-01-20 09:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:11.592506174 +0000 UTC m=+206.348522350" watchObservedRunningTime="2026-01-20 09:22:11.593681475 +0000 UTC m=+206.349697651" Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.572228 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerStarted","Data":"ec6f77eb99a5d9d0f19e1c79a1d47d1ad15d4f867b66a07dc86724ffa52c8d5a"} Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.578118 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerStarted","Data":"4eed9fc3cba48aa455ffb5bc20e03edc20df8b20dc4b276c4d648044fca48153"} Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.594151 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.600236 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.641668 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvdhk" podStartSLOduration=3.226365099 podStartE2EDuration="59.641648131s" podCreationTimestamp="2026-01-20 09:21:13 +0000 UTC" firstStartedPulling="2026-01-20 09:21:15.75741999 +0000 UTC m=+150.513436176" lastFinishedPulling="2026-01-20 09:22:12.172703032 +0000 UTC m=+206.928719208" 
observedRunningTime="2026-01-20 09:22:12.638124398 +0000 UTC m=+207.394140574" watchObservedRunningTime="2026-01-20 09:22:12.641648131 +0000 UTC m=+207.397664307" Jan 20 09:22:12 crc kubenswrapper[4859]: I0120 09:22:12.647124 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nklcx" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="registry-server" probeResult="failure" output=< Jan 20 09:22:12 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:22:12 crc kubenswrapper[4859]: > Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.260388 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.583123 4859 generic.go:334] "Generic (PLEG): container finished" podID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerID="ec6f77eb99a5d9d0f19e1c79a1d47d1ad15d4f867b66a07dc86724ffa52c8d5a" exitCode=0 Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.583157 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerDied","Data":"ec6f77eb99a5d9d0f19e1c79a1d47d1ad15d4f867b66a07dc86724ffa52c8d5a"} Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.608969 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.609097 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:13 crc kubenswrapper[4859]: I0120 09:22:13.669234 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:14 crc kubenswrapper[4859]: I0120 09:22:14.339106 4859 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:14 crc kubenswrapper[4859]: I0120 09:22:14.339425 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:14 crc kubenswrapper[4859]: I0120 09:22:14.339996 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" containerID="cri-o://8be30240239378fdfc6a36502fc944467ba470513ff1a13c6a0d3656cbcfec91" gracePeriod=15 Jan 20 09:22:14 crc kubenswrapper[4859]: I0120 09:22:14.590617 4859 generic.go:334] "Generic (PLEG): container finished" podID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerID="8be30240239378fdfc6a36502fc944467ba470513ff1a13c6a0d3656cbcfec91" exitCode=0 Jan 20 09:22:14 crc kubenswrapper[4859]: I0120 09:22:14.590630 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" event={"ID":"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9","Type":"ContainerDied","Data":"8be30240239378fdfc6a36502fc944467ba470513ff1a13c6a0d3656cbcfec91"} Jan 20 09:22:15 crc kubenswrapper[4859]: I0120 09:22:15.405846 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jvdhk" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="registry-server" probeResult="failure" output=< Jan 20 09:22:15 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:22:15 crc kubenswrapper[4859]: > Jan 20 09:22:15 crc kubenswrapper[4859]: I0120 09:22:15.884169 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.235530 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.333895 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.333937 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.333963 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.333987 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.333997 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334032 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334058 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334073 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334098 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334115 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc 
kubenswrapper[4859]: I0120 09:22:16.334146 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2n4r\" (UniqueName: \"kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334183 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334215 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334242 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334258 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error\") pod \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\" (UID: \"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9\") " Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.334436 4859 reconciler_common.go:293] "Volume detached 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.336381 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.336529 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.340390 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r" (OuterVolumeSpecName: "kube-api-access-s2n4r") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "kube-api-access-s2n4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.341373 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.341453 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.343919 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.344298 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.344311 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.344665 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.345316 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.345578 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.346053 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.349502 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" (UID: "78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435659 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435715 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435731 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435747 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435760 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-router-certs\") 
on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435773 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435814 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435833 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435849 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435863 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435877 4859 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435889 4859 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.435902 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2n4r\" (UniqueName: \"kubernetes.io/projected/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9-kube-api-access-s2n4r\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.605235 4859 generic.go:334] "Generic (PLEG): container finished" podID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerID="39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e" exitCode=0 Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.605293 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerDied","Data":"39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e"} Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.607053 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" event={"ID":"78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9","Type":"ContainerDied","Data":"be229b3de973a247ee70bb3a7094fdb3f05b6b59a96c4f0f4f94b9feb0c1489f"} Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.607360 4859 scope.go:117] "RemoveContainer" containerID="8be30240239378fdfc6a36502fc944467ba470513ff1a13c6a0d3656cbcfec91" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.607281 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7fk4r" Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.610722 4859 generic.go:334] "Generic (PLEG): container finished" podID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerID="872587b39a61cb7b2373753cd22c26da19afc6f95dd129aba3907b444fba0453" exitCode=0 Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.610772 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerDied","Data":"872587b39a61cb7b2373753cd22c26da19afc6f95dd129aba3907b444fba0453"} Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.614131 4859 generic.go:334] "Generic (PLEG): container finished" podID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerID="71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688" exitCode=0 Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.614769 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerDied","Data":"71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688"} Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.683952 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"] Jan 20 09:22:16 crc kubenswrapper[4859]: I0120 09:22:16.685458 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7fk4r"] Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.015742 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"] Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.530684 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2"] Jan 20 09:22:17 crc kubenswrapper[4859]: 
E0120 09:22:17.531155 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.531167 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.531257 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" containerName="oauth-openshift" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.531609 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.535958 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.536075 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.536259 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.536275 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.536315 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.536275 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.540699 4859 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.541072 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.541100 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.541086 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.541100 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.541256 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550201 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550265 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc 
kubenswrapper[4859]: I0120 09:22:17.550293 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550329 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsxh\" (UniqueName: \"kubernetes.io/projected/0193a8a9-e976-4416-ab41-fb5c59e049f0-kube-api-access-srsxh\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550353 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-dir\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550380 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550456 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550496 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550518 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550541 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550627 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-login\") pod 
\"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550722 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-policies\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550775 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.550880 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.554808 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.555815 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.567002 4859 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.570924 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2"] Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.585726 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9" path="/var/lib/kubelet/pods/78d9953b-9ae0-4e0c-9a97-26c04fb8b7e9/volumes" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.621501 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerStarted","Data":"9a85f92736c094b89cea55aff19dd2f979419b2abd364818e3bc1690fbbcc331"} Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.623439 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerStarted","Data":"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3"} Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.625595 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerStarted","Data":"9daba3f9837ec2e3a6e26a2d91b6ff56e8d89731a7b81d9bab1d5559a44d3469"} Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.628214 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerStarted","Data":"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521"} Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.633742 4859 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-wm427" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="registry-server" containerID="cri-o://523fcdb9eeb6074262461055eb869899dfd78916d7dcddb032b316544c654e6b" gracePeriod=2 Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.643608 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xnjx9" podStartSLOduration=5.220515054 podStartE2EDuration="1m7.643592063s" podCreationTimestamp="2026-01-20 09:21:10 +0000 UTC" firstStartedPulling="2026-01-20 09:21:14.665176896 +0000 UTC m=+149.421193072" lastFinishedPulling="2026-01-20 09:22:17.088253885 +0000 UTC m=+211.844270081" observedRunningTime="2026-01-20 09:22:17.641212479 +0000 UTC m=+212.397228655" watchObservedRunningTime="2026-01-20 09:22:17.643592063 +0000 UTC m=+212.399608229" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.651951 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.651992 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652010 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652036 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652051 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652094 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-policies\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652109 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 
09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652158 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652209 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652250 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652273 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652316 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srsxh\" (UniqueName: 
\"kubernetes.io/projected/0193a8a9-e976-4416-ab41-fb5c59e049f0-kube-api-access-srsxh\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652341 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-dir\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.652362 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.656248 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-dir\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.658229 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.658826 4859 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-audit-policies\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.659107 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.659363 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.659429 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.659468 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: 
\"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.660881 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.661524 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.662424 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.667910 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.668262 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.668807 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2cm2d" podStartSLOduration=5.087238544 podStartE2EDuration="1m7.668774053s" podCreationTimestamp="2026-01-20 09:21:10 +0000 UTC" firstStartedPulling="2026-01-20 09:21:14.482126282 +0000 UTC m=+149.238142468" lastFinishedPulling="2026-01-20 09:22:17.063661811 +0000 UTC m=+211.819677977" observedRunningTime="2026-01-20 09:22:17.668006322 +0000 UTC m=+212.424022498" watchObservedRunningTime="2026-01-20 09:22:17.668774053 +0000 UTC m=+212.424790229" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.675993 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0193a8a9-e976-4416-ab41-fb5c59e049f0-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.679102 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsxh\" (UniqueName: \"kubernetes.io/projected/0193a8a9-e976-4416-ab41-fb5c59e049f0-kube-api-access-srsxh\") pod \"oauth-openshift-77df6bdc9c-r9cx2\" (UID: \"0193a8a9-e976-4416-ab41-fb5c59e049f0\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.688282 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-9sm4f" podStartSLOduration=5.006099867 podStartE2EDuration="1m7.688263541s" podCreationTimestamp="2026-01-20 09:21:10 +0000 UTC" firstStartedPulling="2026-01-20 09:21:14.664012514 +0000 UTC m=+149.420028690" lastFinishedPulling="2026-01-20 09:22:17.346176188 +0000 UTC m=+212.102192364" observedRunningTime="2026-01-20 09:22:17.686061692 +0000 UTC m=+212.442077868" watchObservedRunningTime="2026-01-20 09:22:17.688263541 +0000 UTC m=+212.444279717" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.705130 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sx2vc" podStartSLOduration=2.449597748 podStartE2EDuration="1m3.70511311s" podCreationTimestamp="2026-01-20 09:21:14 +0000 UTC" firstStartedPulling="2026-01-20 09:21:15.725688275 +0000 UTC m=+150.481704451" lastFinishedPulling="2026-01-20 09:22:16.981203597 +0000 UTC m=+211.737219813" observedRunningTime="2026-01-20 09:22:17.703883557 +0000 UTC m=+212.459899733" watchObservedRunningTime="2026-01-20 09:22:17.70511311 +0000 UTC m=+212.461129286" Jan 20 09:22:17 crc kubenswrapper[4859]: I0120 09:22:17.846044 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.349822 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2"] Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.645485 4859 generic.go:334] "Generic (PLEG): container finished" podID="445caea8-7708-4332-b903-dd1b9409c756" containerID="523fcdb9eeb6074262461055eb869899dfd78916d7dcddb032b316544c654e6b" exitCode=0 Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.645565 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerDied","Data":"523fcdb9eeb6074262461055eb869899dfd78916d7dcddb032b316544c654e6b"} Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.647638 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" event={"ID":"0193a8a9-e976-4416-ab41-fb5c59e049f0","Type":"ContainerStarted","Data":"c6ad7d9275b57c9d71824010b1ca76ffa1894b15a7933a79baf9ec4d0b4734fd"} Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.647674 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" event={"ID":"0193a8a9-e976-4416-ab41-fb5c59e049f0","Type":"ContainerStarted","Data":"053987e784fee0805dd60245313be3ad666571c11b749c872af7971b98cef7d3"} Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.647953 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.649704 4859 patch_prober.go:28] interesting pod/oauth-openshift-77df6bdc9c-r9cx2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial 
tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.649738 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" podUID="0193a8a9-e976-4416-ab41-fb5c59e049f0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.667972 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" podStartSLOduration=29.66795081 podStartE2EDuration="29.66795081s" podCreationTimestamp="2026-01-20 09:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:18.664668703 +0000 UTC m=+213.420684879" watchObservedRunningTime="2026-01-20 09:22:18.66795081 +0000 UTC m=+213.423966986" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.906368 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.970420 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities\") pod \"445caea8-7708-4332-b903-dd1b9409c756\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.970572 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content\") pod \"445caea8-7708-4332-b903-dd1b9409c756\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.970609 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkkps\" (UniqueName: \"kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps\") pod \"445caea8-7708-4332-b903-dd1b9409c756\" (UID: \"445caea8-7708-4332-b903-dd1b9409c756\") " Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.971895 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities" (OuterVolumeSpecName: "utilities") pod "445caea8-7708-4332-b903-dd1b9409c756" (UID: "445caea8-7708-4332-b903-dd1b9409c756"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:18 crc kubenswrapper[4859]: I0120 09:22:18.978930 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps" (OuterVolumeSpecName: "kube-api-access-nkkps") pod "445caea8-7708-4332-b903-dd1b9409c756" (UID: "445caea8-7708-4332-b903-dd1b9409c756"). InnerVolumeSpecName "kube-api-access-nkkps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.006432 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "445caea8-7708-4332-b903-dd1b9409c756" (UID: "445caea8-7708-4332-b903-dd1b9409c756"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.072581 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.072617 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkkps\" (UniqueName: \"kubernetes.io/projected/445caea8-7708-4332-b903-dd1b9409c756-kube-api-access-nkkps\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.072630 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445caea8-7708-4332-b903-dd1b9409c756-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.660644 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm427" event={"ID":"445caea8-7708-4332-b903-dd1b9409c756","Type":"ContainerDied","Data":"d492dd272a0ea78b7584406b254f740075de684808d3e9671b85697fd3d104df"} Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.660726 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm427" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.660768 4859 scope.go:117] "RemoveContainer" containerID="523fcdb9eeb6074262461055eb869899dfd78916d7dcddb032b316544c654e6b" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.671036 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77df6bdc9c-r9cx2" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.689044 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"] Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.705054 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm427"] Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.715213 4859 scope.go:117] "RemoveContainer" containerID="c20b936cbc71869de3241a98baa45c81d8917058f57c3c4acda300659cb91ea2" Jan 20 09:22:19 crc kubenswrapper[4859]: I0120 09:22:19.802293 4859 scope.go:117] "RemoveContainer" containerID="7e09973b8f77e441a9073c1cb134cf173c175b5ed4dffbf10b2afb638bc5a8cc" Jan 20 09:22:20 crc kubenswrapper[4859]: I0120 09:22:20.910319 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:20 crc kubenswrapper[4859]: I0120 09:22:20.910371 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:20 crc kubenswrapper[4859]: I0120 09:22:20.977820 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.113233 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.114663 4859 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.157038 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.326036 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.326131 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.372084 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.586862 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445caea8-7708-4332-b903-dd1b9409c756" path="/var/lib/kubelet/pods/445caea8-7708-4332-b903-dd1b9409c756/volumes" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.607339 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:21 crc kubenswrapper[4859]: I0120 09:22:21.645272 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:22 crc kubenswrapper[4859]: I0120 09:22:22.757634 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:22 crc kubenswrapper[4859]: I0120 09:22:22.757755 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:24 crc kubenswrapper[4859]: I0120 09:22:24.408698 4859 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:24 crc kubenswrapper[4859]: I0120 09:22:24.451228 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:24 crc kubenswrapper[4859]: I0120 09:22:24.712560 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:24 crc kubenswrapper[4859]: I0120 09:22:24.713926 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:24 crc kubenswrapper[4859]: I0120 09:22:24.777350 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:25 crc kubenswrapper[4859]: I0120 09:22:25.414447 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:22:25 crc kubenswrapper[4859]: I0120 09:22:25.414870 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nklcx" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="registry-server" containerID="cri-o://49f641fbf5a65b04d3304f0b2e49992846db99d50d27ffb1359db1887bd53d11" gracePeriod=2 Jan 20 09:22:25 crc kubenswrapper[4859]: I0120 09:22:25.787594 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:26 crc kubenswrapper[4859]: I0120 09:22:26.718936 4859 generic.go:334] "Generic (PLEG): container finished" podID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerID="49f641fbf5a65b04d3304f0b2e49992846db99d50d27ffb1359db1887bd53d11" exitCode=0 Jan 20 09:22:26 crc kubenswrapper[4859]: I0120 09:22:26.719045 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" 
event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerDied","Data":"49f641fbf5a65b04d3304f0b2e49992846db99d50d27ffb1359db1887bd53d11"} Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.670318 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.725560 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nklcx" event={"ID":"e1957112-94e7-495e-8d4a-bb9bac57988c","Type":"ContainerDied","Data":"67b81ff418ddb614e8a33996e9bb53da1f239095b879cbf7e100aac70549c9d5"} Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.725582 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nklcx" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.725621 4859 scope.go:117] "RemoveContainer" containerID="49f641fbf5a65b04d3304f0b2e49992846db99d50d27ffb1359db1887bd53d11" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.744721 4859 scope.go:117] "RemoveContainer" containerID="ff40a2e5888fdd6155eb3bfd87727ce27acc71996cfe8970403413b3772f934b" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.780257 4859 scope.go:117] "RemoveContainer" containerID="3b17f29fcdc8c0cf33dadbbf526184462949af8f482757348e521aefe93aa5a1" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.793353 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89vt\" (UniqueName: \"kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt\") pod \"e1957112-94e7-495e-8d4a-bb9bac57988c\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.793576 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities\") pod \"e1957112-94e7-495e-8d4a-bb9bac57988c\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.793711 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content\") pod \"e1957112-94e7-495e-8d4a-bb9bac57988c\" (UID: \"e1957112-94e7-495e-8d4a-bb9bac57988c\") " Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.794739 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities" (OuterVolumeSpecName: "utilities") pod "e1957112-94e7-495e-8d4a-bb9bac57988c" (UID: "e1957112-94e7-495e-8d4a-bb9bac57988c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.800130 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt" (OuterVolumeSpecName: "kube-api-access-h89vt") pod "e1957112-94e7-495e-8d4a-bb9bac57988c" (UID: "e1957112-94e7-495e-8d4a-bb9bac57988c"). InnerVolumeSpecName "kube-api-access-h89vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.845643 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1957112-94e7-495e-8d4a-bb9bac57988c" (UID: "e1957112-94e7-495e-8d4a-bb9bac57988c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.894831 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89vt\" (UniqueName: \"kubernetes.io/projected/e1957112-94e7-495e-8d4a-bb9bac57988c-kube-api-access-h89vt\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.894873 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:27 crc kubenswrapper[4859]: I0120 09:22:27.894887 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1957112-94e7-495e-8d4a-bb9bac57988c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.054074 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.059274 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nklcx"] Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.619454 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.619981 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" containerID="cri-o://1261cdd610b741a2116a3052527128121d841300a7d6ef5614a79a4a4f762d3b" gracePeriod=30 Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.725195 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 
20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.725441 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerName="route-controller-manager" containerID="cri-o://d637f901eb14b1d6519ff018808fbd8e9a7ddce4192ef8786720f6f5369cddac" gracePeriod=30 Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.808918 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"] Jan 20 09:22:28 crc kubenswrapper[4859]: I0120 09:22:28.809123 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sx2vc" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="registry-server" containerID="cri-o://9daba3f9837ec2e3a6e26a2d91b6ff56e8d89731a7b81d9bab1d5559a44d3469" gracePeriod=2 Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.580462 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" path="/var/lib/kubelet/pods/e1957112-94e7-495e-8d4a-bb9bac57988c/volumes" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.742206 4859 generic.go:334] "Generic (PLEG): container finished" podID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerID="1261cdd610b741a2116a3052527128121d841300a7d6ef5614a79a4a4f762d3b" exitCode=0 Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.742263 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" event={"ID":"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3","Type":"ContainerDied","Data":"1261cdd610b741a2116a3052527128121d841300a7d6ef5614a79a4a4f762d3b"} Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.743639 4859 generic.go:334] "Generic (PLEG): container finished" podID="a8295b62-6cc7-4fed-985e-268605a7e4f0" 
containerID="9daba3f9837ec2e3a6e26a2d91b6ff56e8d89731a7b81d9bab1d5559a44d3469" exitCode=0 Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.743682 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerDied","Data":"9daba3f9837ec2e3a6e26a2d91b6ff56e8d89731a7b81d9bab1d5559a44d3469"} Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.745930 4859 generic.go:334] "Generic (PLEG): container finished" podID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerID="d637f901eb14b1d6519ff018808fbd8e9a7ddce4192ef8786720f6f5369cddac" exitCode=0 Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.745956 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" event={"ID":"f91229d3-0e84-49af-a380-b09b7aea6ecd","Type":"ContainerDied","Data":"d637f901eb14b1d6519ff018808fbd8e9a7ddce4192ef8786720f6f5369cddac"} Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.890539 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.897845 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.912236 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp"] Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.919260 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.919471 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.919667 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.919775 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.919935 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.919991 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.920046 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.920107 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.920190 4859 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.920571 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.928106 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.929908 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.930014 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.932679 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="extract-content" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.932816 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.932927 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="extract-utilities" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.932995 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933059 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: E0120 09:22:29.933118 4859 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerName="route-controller-manager" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933176 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerName="route-controller-manager" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933412 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933501 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" containerName="route-controller-manager" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933569 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1957112-94e7-495e-8d4a-bb9bac57988c" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.933633 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="445caea8-7708-4332-b903-dd1b9409c756" containerName="registry-server" Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.934120 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp"] Jan 20 09:22:29 crc kubenswrapper[4859]: I0120 09:22:29.934309 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018293 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities\") pod \"a8295b62-6cc7-4fed-985e-268605a7e4f0\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018404 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content\") pod \"a8295b62-6cc7-4fed-985e-268605a7e4f0\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018438 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26g24\" (UniqueName: \"kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24\") pod \"a8295b62-6cc7-4fed-985e-268605a7e4f0\" (UID: \"a8295b62-6cc7-4fed-985e-268605a7e4f0\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018482 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config\") pod \"f91229d3-0e84-49af-a380-b09b7aea6ecd\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018513 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9jhj\" (UniqueName: \"kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj\") pod \"f91229d3-0e84-49af-a380-b09b7aea6ecd\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018548 4859 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca\") pod \"f91229d3-0e84-49af-a380-b09b7aea6ecd\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.018585 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert\") pod \"f91229d3-0e84-49af-a380-b09b7aea6ecd\" (UID: \"f91229d3-0e84-49af-a380-b09b7aea6ecd\") " Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.020183 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities" (OuterVolumeSpecName: "utilities") pod "a8295b62-6cc7-4fed-985e-268605a7e4f0" (UID: "a8295b62-6cc7-4fed-985e-268605a7e4f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.020427 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca" (OuterVolumeSpecName: "client-ca") pod "f91229d3-0e84-49af-a380-b09b7aea6ecd" (UID: "f91229d3-0e84-49af-a380-b09b7aea6ecd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.020451 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config" (OuterVolumeSpecName: "config") pod "f91229d3-0e84-49af-a380-b09b7aea6ecd" (UID: "f91229d3-0e84-49af-a380-b09b7aea6ecd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.023976 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f91229d3-0e84-49af-a380-b09b7aea6ecd" (UID: "f91229d3-0e84-49af-a380-b09b7aea6ecd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.024243 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj" (OuterVolumeSpecName: "kube-api-access-z9jhj") pod "f91229d3-0e84-49af-a380-b09b7aea6ecd" (UID: "f91229d3-0e84-49af-a380-b09b7aea6ecd"). InnerVolumeSpecName "kube-api-access-z9jhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.024403 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24" (OuterVolumeSpecName: "kube-api-access-26g24") pod "a8295b62-6cc7-4fed-985e-268605a7e4f0" (UID: "a8295b62-6cc7-4fed-985e-268605a7e4f0"). InnerVolumeSpecName "kube-api-access-26g24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.119722 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-client-ca\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.119794 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-config\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.119860 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79mg\" (UniqueName: \"kubernetes.io/projected/5913e9b7-c37f-4873-998e-e6282290d02a-kube-api-access-x79mg\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.119947 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5913e9b7-c37f-4873-998e-e6282290d02a-serving-cert\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120140 4859 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-26g24\" (UniqueName: \"kubernetes.io/projected/a8295b62-6cc7-4fed-985e-268605a7e4f0-kube-api-access-26g24\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120154 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120164 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9jhj\" (UniqueName: \"kubernetes.io/projected/f91229d3-0e84-49af-a380-b09b7aea6ecd-kube-api-access-z9jhj\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120174 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f91229d3-0e84-49af-a380-b09b7aea6ecd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120184 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f91229d3-0e84-49af-a380-b09b7aea6ecd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.120192 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.144342 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8295b62-6cc7-4fed-985e-268605a7e4f0" (UID: "a8295b62-6cc7-4fed-985e-268605a7e4f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.221316 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-config\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.222928 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x79mg\" (UniqueName: \"kubernetes.io/projected/5913e9b7-c37f-4873-998e-e6282290d02a-kube-api-access-x79mg\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.223055 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5913e9b7-c37f-4873-998e-e6282290d02a-serving-cert\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.223181 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-client-ca\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.223273 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a8295b62-6cc7-4fed-985e-268605a7e4f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.222553 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-config\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.224039 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5913e9b7-c37f-4873-998e-e6282290d02a-client-ca\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.230668 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5913e9b7-c37f-4873-998e-e6282290d02a-serving-cert\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.244947 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x79mg\" (UniqueName: \"kubernetes.io/projected/5913e9b7-c37f-4873-998e-e6282290d02a-kube-api-access-x79mg\") pod \"route-controller-manager-55cfbdc8f-twvrp\" (UID: \"5913e9b7-c37f-4873-998e-e6282290d02a\") " pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.254686 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.735768 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp"] Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.755358 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" event={"ID":"5913e9b7-c37f-4873-998e-e6282290d02a","Type":"ContainerStarted","Data":"efa896f53b0156893ff23e9725393dd61d24a9ec8dcce2dbdddd0d1b46835aac"} Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.757264 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.757353 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt" event={"ID":"f91229d3-0e84-49af-a380-b09b7aea6ecd","Type":"ContainerDied","Data":"07a9edd58a8ed6e35967546101b9d9897d1d1fb2bfb55a420e863e604831e638"} Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.757402 4859 scope.go:117] "RemoveContainer" containerID="d637f901eb14b1d6519ff018808fbd8e9a7ddce4192ef8786720f6f5369cddac" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.763968 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx2vc" event={"ID":"a8295b62-6cc7-4fed-985e-268605a7e4f0","Type":"ContainerDied","Data":"9a99bf0ec1238990a450d6e64fdfa542ec319b48891496141da77f4fe9be42dd"} Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.764017 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx2vc" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.782813 4859 scope.go:117] "RemoveContainer" containerID="9daba3f9837ec2e3a6e26a2d91b6ff56e8d89731a7b81d9bab1d5559a44d3469" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.793702 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.795443 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66774c6f5d-dn2gt"] Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.799197 4859 scope.go:117] "RemoveContainer" containerID="ec6f77eb99a5d9d0f19e1c79a1d47d1ad15d4f867b66a07dc86724ffa52c8d5a" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.808984 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"] Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.812639 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sx2vc"] Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.827711 4859 scope.go:117] "RemoveContainer" containerID="5bd5704ae30eaa6bfa0981ece12161659528cb7a69de9b058bbccfa69d89f778" Jan 20 09:22:30 crc kubenswrapper[4859]: I0120 09:22:30.893260 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.034684 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config\") pod \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.034720 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trs84\" (UniqueName: \"kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84\") pod \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.034828 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca\") pod \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.034876 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert\") pod \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.034899 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles\") pod \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\" (UID: \"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3\") " Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.035549 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" (UID: "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.035566 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" (UID: "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.035728 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config" (OuterVolumeSpecName: "config") pod "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" (UID: "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.039535 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" (UID: "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.039564 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84" (OuterVolumeSpecName: "kube-api-access-trs84") pod "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" (UID: "45993ecc-ab33-409c-ba0d-7efd5ad4f5e3"). InnerVolumeSpecName "kube-api-access-trs84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.135929 4859 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.135961 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trs84\" (UniqueName: \"kubernetes.io/projected/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-kube-api-access-trs84\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.135970 4859 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.135980 4859 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.135990 4859 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.370739 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.579420 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8295b62-6cc7-4fed-985e-268605a7e4f0" path="/var/lib/kubelet/pods/a8295b62-6cc7-4fed-985e-268605a7e4f0/volumes" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.580094 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91229d3-0e84-49af-a380-b09b7aea6ecd" 
path="/var/lib/kubelet/pods/f91229d3-0e84-49af-a380-b09b7aea6ecd/volumes" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.771918 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" event={"ID":"45993ecc-ab33-409c-ba0d-7efd5ad4f5e3","Type":"ContainerDied","Data":"c2d7dad5b6fa6427ecc7d9eaafb0db5c76b2abaa5b2fe4c10804c02ffdf4df2a"} Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.771973 4859 scope.go:117] "RemoveContainer" containerID="1261cdd610b741a2116a3052527128121d841300a7d6ef5614a79a4a4f762d3b" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.772040 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.801633 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.806071 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-547db796ff-v7d9d"] Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.872277 4859 patch_prober.go:28] interesting pod/controller-manager-547db796ff-v7d9d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:22:31 crc kubenswrapper[4859]: I0120 09:22:31.872352 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-547db796ff-v7d9d" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.559956 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr"] Jan 20 09:22:32 crc kubenswrapper[4859]: E0120 09:22:32.560423 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.560434 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.560870 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" containerName="controller-manager" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.562519 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.566615 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.566866 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.567130 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.567411 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.567585 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 
09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.571482 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.574872 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr"] Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.577178 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.755804 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-config\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.755872 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facca86e-e733-425a-b7f5-8403e6e632b4-serving-cert\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.755907 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgsx5\" (UniqueName: \"kubernetes.io/projected/facca86e-e733-425a-b7f5-8403e6e632b4-kube-api-access-qgsx5\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.755938 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-client-ca\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.755959 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-proxy-ca-bundles\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.782438 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" event={"ID":"5913e9b7-c37f-4873-998e-e6282290d02a","Type":"ContainerStarted","Data":"9cc39e8f527d11397d6c767db66c1fec3806b6ebb7c65e160859a645353b9527"} Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.782660 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.790663 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.806881 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55cfbdc8f-twvrp" podStartSLOduration=4.806859095 podStartE2EDuration="4.806859095s" podCreationTimestamp="2026-01-20 09:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 
09:22:32.797682111 +0000 UTC m=+227.553698287" watchObservedRunningTime="2026-01-20 09:22:32.806859095 +0000 UTC m=+227.562875271" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.857206 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-config\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.857257 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facca86e-e733-425a-b7f5-8403e6e632b4-serving-cert\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.857286 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgsx5\" (UniqueName: \"kubernetes.io/projected/facca86e-e733-425a-b7f5-8403e6e632b4-kube-api-access-qgsx5\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.857312 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-client-ca\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.857326 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-proxy-ca-bundles\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.858365 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-client-ca\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.858676 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-config\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.859836 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/facca86e-e733-425a-b7f5-8403e6e632b4-proxy-ca-bundles\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.863164 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facca86e-e733-425a-b7f5-8403e6e632b4-serving-cert\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:32 crc kubenswrapper[4859]: I0120 09:22:32.884257 4859 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-qgsx5\" (UniqueName: \"kubernetes.io/projected/facca86e-e733-425a-b7f5-8403e6e632b4-kube-api-access-qgsx5\") pod \"controller-manager-5cb569b9c7-wx6rr\" (UID: \"facca86e-e733-425a-b7f5-8403e6e632b4\") " pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:33 crc kubenswrapper[4859]: I0120 09:22:33.181008 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:33 crc kubenswrapper[4859]: I0120 09:22:33.597402 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45993ecc-ab33-409c-ba0d-7efd5ad4f5e3" path="/var/lib/kubelet/pods/45993ecc-ab33-409c-ba0d-7efd5ad4f5e3/volumes" Jan 20 09:22:33 crc kubenswrapper[4859]: I0120 09:22:33.652763 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr"] Jan 20 09:22:33 crc kubenswrapper[4859]: W0120 09:22:33.653634 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfacca86e_e733_425a_b7f5_8403e6e632b4.slice/crio-256e19bb0df07d8b3f50988e8c8572fde246692c5ecd6f9554434d1a7f713c9b WatchSource:0}: Error finding container 256e19bb0df07d8b3f50988e8c8572fde246692c5ecd6f9554434d1a7f713c9b: Status 404 returned error can't find the container with id 256e19bb0df07d8b3f50988e8c8572fde246692c5ecd6f9554434d1a7f713c9b Jan 20 09:22:33 crc kubenswrapper[4859]: I0120 09:22:33.791834 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" event={"ID":"facca86e-e733-425a-b7f5-8403e6e632b4","Type":"ContainerStarted","Data":"256e19bb0df07d8b3f50988e8c8572fde246692c5ecd6f9554434d1a7f713c9b"} Jan 20 09:22:34 crc kubenswrapper[4859]: I0120 09:22:34.796554 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" event={"ID":"facca86e-e733-425a-b7f5-8403e6e632b4","Type":"ContainerStarted","Data":"7ea7bec9ccbd726dc32bf313ff8ff1a50dd1ceabd3d7244746d00d8705133813"} Jan 20 09:22:34 crc kubenswrapper[4859]: I0120 09:22:34.796911 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:34 crc kubenswrapper[4859]: I0120 09:22:34.801434 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" Jan 20 09:22:34 crc kubenswrapper[4859]: I0120 09:22:34.815941 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cb569b9c7-wx6rr" podStartSLOduration=6.815925766 podStartE2EDuration="6.815925766s" podCreationTimestamp="2026-01-20 09:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:34.815049123 +0000 UTC m=+229.571065309" watchObservedRunningTime="2026-01-20 09:22:34.815925766 +0000 UTC m=+229.571941942" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.208499 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.208815 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9sm4f" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="registry-server" containerID="cri-o://c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521" gracePeriod=2 Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.652550 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.701936 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjw2d\" (UniqueName: \"kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d\") pod \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.702311 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content\") pod \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.702383 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities\") pod \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\" (UID: \"cf69514d-13ef-4ba7-9a8a-1d2656df59fb\") " Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.703292 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities" (OuterVolumeSpecName: "utilities") pod "cf69514d-13ef-4ba7-9a8a-1d2656df59fb" (UID: "cf69514d-13ef-4ba7-9a8a-1d2656df59fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.703511 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.710414 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d" (OuterVolumeSpecName: "kube-api-access-pjw2d") pod "cf69514d-13ef-4ba7-9a8a-1d2656df59fb" (UID: "cf69514d-13ef-4ba7-9a8a-1d2656df59fb"). InnerVolumeSpecName "kube-api-access-pjw2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.770951 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf69514d-13ef-4ba7-9a8a-1d2656df59fb" (UID: "cf69514d-13ef-4ba7-9a8a-1d2656df59fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.803929 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.803957 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjw2d\" (UniqueName: \"kubernetes.io/projected/cf69514d-13ef-4ba7-9a8a-1d2656df59fb-kube-api-access-pjw2d\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.805847 4859 generic.go:334] "Generic (PLEG): container finished" podID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerID="c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521" exitCode=0 Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.805932 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9sm4f" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.805901 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerDied","Data":"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521"} Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.805989 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sm4f" event={"ID":"cf69514d-13ef-4ba7-9a8a-1d2656df59fb","Type":"ContainerDied","Data":"b7b203c3fd105c6f23f5e8478b09384ed51710a81e32b4e05b74f07ee2e0a1ad"} Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.806016 4859 scope.go:117] "RemoveContainer" containerID="c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.824598 4859 scope.go:117] "RemoveContainer" 
containerID="39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.835158 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.841550 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9sm4f"] Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.861728 4859 scope.go:117] "RemoveContainer" containerID="945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.881498 4859 scope.go:117] "RemoveContainer" containerID="c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521" Jan 20 09:22:35 crc kubenswrapper[4859]: E0120 09:22:35.881971 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521\": container with ID starting with c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521 not found: ID does not exist" containerID="c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.882009 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521"} err="failed to get container status \"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521\": rpc error: code = NotFound desc = could not find container \"c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521\": container with ID starting with c8f30452c00f130b6a39bc1c4e385b28c5e131ac3a790f3863221f92b707d521 not found: ID does not exist" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.882034 4859 scope.go:117] "RemoveContainer" 
containerID="39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e" Jan 20 09:22:35 crc kubenswrapper[4859]: E0120 09:22:35.882361 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e\": container with ID starting with 39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e not found: ID does not exist" containerID="39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.882403 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e"} err="failed to get container status \"39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e\": rpc error: code = NotFound desc = could not find container \"39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e\": container with ID starting with 39c1b241bc22c0cab0ebbb16614ebb47e5e5587fc0b108ed0005f1437522af5e not found: ID does not exist" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.882424 4859 scope.go:117] "RemoveContainer" containerID="945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474" Jan 20 09:22:35 crc kubenswrapper[4859]: E0120 09:22:35.883156 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474\": container with ID starting with 945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474 not found: ID does not exist" containerID="945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474" Jan 20 09:22:35 crc kubenswrapper[4859]: I0120 09:22:35.883184 4859 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474"} err="failed to get container status \"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474\": rpc error: code = NotFound desc = could not find container \"945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474\": container with ID starting with 945c169a31e99922c38124002e851cfc8534da5126d701600782ee27011e6474 not found: ID does not exist" Jan 20 09:22:37 crc kubenswrapper[4859]: I0120 09:22:37.580986 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" path="/var/lib/kubelet/pods/cf69514d-13ef-4ba7-9a8a-1d2656df59fb/volumes" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.070578 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.070860 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xnjx9" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="registry-server" containerID="cri-o://9a85f92736c094b89cea55aff19dd2f979419b2abd364818e3bc1690fbbcc331" gracePeriod=30 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.087928 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.088162 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2cm2d" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="registry-server" containerID="cri-o://d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3" gracePeriod=30 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.090959 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"] Jan 20 
09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.091156 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" containerID="cri-o://06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5" gracePeriod=30 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.103486 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.108508 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqpn6"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.109706 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qcvpq" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="registry-server" containerID="cri-o://1bc7b8287fd2d00d2b9fbf636f2de3c7043f3095dac2899d129c30b906938acb" gracePeriod=30 Jan 20 09:22:38 crc kubenswrapper[4859]: E0120 09:22:38.112519 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="extract-content" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.112568 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="extract-content" Jan 20 09:22:38 crc kubenswrapper[4859]: E0120 09:22:38.112597 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="registry-server" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.112612 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="registry-server" Jan 20 09:22:38 crc kubenswrapper[4859]: E0120 09:22:38.112657 4859 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="extract-utilities" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.112672 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="extract-utilities" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.112999 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf69514d-13ef-4ba7-9a8a-1d2656df59fb" containerName="registry-server" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.114390 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.114698 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvdhk" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="registry-server" containerID="cri-o://4eed9fc3cba48aa455ffb5bc20e03edc20df8b20dc4b276c4d648044fca48153" gracePeriod=30 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.114775 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.127124 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqpn6"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.233979 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.234317 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.234342 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvmg\" (UniqueName: \"kubernetes.io/projected/d30b8424-c2b6-4ae5-9790-74198806c882-kube-api-access-rdvmg\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.335577 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: 
\"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.335628 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvmg\" (UniqueName: \"kubernetes.io/projected/d30b8424-c2b6-4ae5-9790-74198806c882-kube-api-access-rdvmg\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.335656 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.337035 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.342894 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d30b8424-c2b6-4ae5-9790-74198806c882-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.360541 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvmg\" 
(UniqueName: \"kubernetes.io/projected/d30b8424-c2b6-4ae5-9790-74198806c882-kube-api-access-rdvmg\") pod \"marketplace-operator-79b997595-rqpn6\" (UID: \"d30b8424-c2b6-4ae5-9790-74198806c882\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.443626 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.565768 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.752436 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities\") pod \"662d5810-d101-40f8-9cf9-6e46d3177b6a\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.752478 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggch5\" (UniqueName: \"kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5\") pod \"662d5810-d101-40f8-9cf9-6e46d3177b6a\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.752540 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content\") pod \"662d5810-d101-40f8-9cf9-6e46d3177b6a\" (UID: \"662d5810-d101-40f8-9cf9-6e46d3177b6a\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.753315 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities" (OuterVolumeSpecName: "utilities") pod 
"662d5810-d101-40f8-9cf9-6e46d3177b6a" (UID: "662d5810-d101-40f8-9cf9-6e46d3177b6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.761917 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5" (OuterVolumeSpecName: "kube-api-access-ggch5") pod "662d5810-d101-40f8-9cf9-6e46d3177b6a" (UID: "662d5810-d101-40f8-9cf9-6e46d3177b6a"). InnerVolumeSpecName "kube-api-access-ggch5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.804542 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.811477 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqpn6"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.817771 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "662d5810-d101-40f8-9cf9-6e46d3177b6a" (UID: "662d5810-d101-40f8-9cf9-6e46d3177b6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.824444 4859 generic.go:334] "Generic (PLEG): container finished" podID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerID="4eed9fc3cba48aa455ffb5bc20e03edc20df8b20dc4b276c4d648044fca48153" exitCode=0 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.824497 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerDied","Data":"4eed9fc3cba48aa455ffb5bc20e03edc20df8b20dc4b276c4d648044fca48153"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.827698 4859 generic.go:334] "Generic (PLEG): container finished" podID="e5c5042b-9158-4a46-b771-19f91eab097f" containerID="06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5" exitCode=0 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.827745 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" event={"ID":"e5c5042b-9158-4a46-b771-19f91eab097f","Type":"ContainerDied","Data":"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.827763 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" event={"ID":"e5c5042b-9158-4a46-b771-19f91eab097f","Type":"ContainerDied","Data":"e5b1a6f13f35925a3498b2fd759c60d3ba02d19ee9a7c7b8a53dc48b77b64ae4"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.827827 4859 scope.go:117] "RemoveContainer" containerID="06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.827925 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpkn9" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.836146 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.837387 4859 generic.go:334] "Generic (PLEG): container finished" podID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerID="9a85f92736c094b89cea55aff19dd2f979419b2abd364818e3bc1690fbbcc331" exitCode=0 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.837435 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerDied","Data":"9a85f92736c094b89cea55aff19dd2f979419b2abd364818e3bc1690fbbcc331"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.847422 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.852201 4859 generic.go:334] "Generic (PLEG): container finished" podID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerID="1bc7b8287fd2d00d2b9fbf636f2de3c7043f3095dac2899d129c30b906938acb" exitCode=0 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.852289 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerDied","Data":"1bc7b8287fd2d00d2b9fbf636f2de3c7043f3095dac2899d129c30b906938acb"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.853648 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggch5\" (UniqueName: \"kubernetes.io/projected/662d5810-d101-40f8-9cf9-6e46d3177b6a-kube-api-access-ggch5\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.853672 4859 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.853682 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662d5810-d101-40f8-9cf9-6e46d3177b6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.856775 4859 generic.go:334] "Generic (PLEG): container finished" podID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerID="d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3" exitCode=0 Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.856832 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cm2d" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.856831 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerDied","Data":"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.856994 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cm2d" event={"ID":"662d5810-d101-40f8-9cf9-6e46d3177b6a","Type":"ContainerDied","Data":"05f6e9e0ad0fb4527966a68390312e9f1a6d43044d576cb72917afc4259beb7b"} Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.869108 4859 scope.go:117] "RemoveContainer" containerID="06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5" Jan 20 09:22:38 crc kubenswrapper[4859]: E0120 09:22:38.869953 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5\": 
container with ID starting with 06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5 not found: ID does not exist" containerID="06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.869985 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5"} err="failed to get container status \"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5\": rpc error: code = NotFound desc = could not find container \"06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5\": container with ID starting with 06a806a05b63a38a2b168449abb842751d85e6295e6adf99dd829dd77e9bc0b5 not found: ID does not exist" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.870003 4859 scope.go:117] "RemoveContainer" containerID="1bc7b8287fd2d00d2b9fbf636f2de3c7043f3095dac2899d129c30b906938acb" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.901205 4859 scope.go:117] "RemoveContainer" containerID="e2c3c5be311010aa13a2b148dfa9381206beb074fb23d0b59e457bfdd902a3a1" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.908114 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.945244 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.948531 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2cm2d"] Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.952881 4859 scope.go:117] "RemoveContainer" containerID="505e8487ffce8ff8d31d48afeefcbad8fa7caf45acb7205f65fcc3260b6bac6e" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.954758 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvqg\" (UniqueName: \"kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg\") pod \"e5c5042b-9158-4a46-b771-19f91eab097f\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.955769 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e5c5042b-9158-4a46-b771-19f91eab097f" (UID: "e5c5042b-9158-4a46-b771-19f91eab097f"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.954856 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca\") pod \"e5c5042b-9158-4a46-b771-19f91eab097f\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958379 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssgwr\" (UniqueName: \"kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr\") pod \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958463 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzs94\" (UniqueName: \"kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94\") pod \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958524 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87j4d\" (UniqueName: \"kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d\") pod \"2190970d-eb97-4db5-8cb2-ad14997411ab\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958598 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities\") pod \"2190970d-eb97-4db5-8cb2-ad14997411ab\" (UID: \"2190970d-eb97-4db5-8cb2-ad14997411ab\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958663 4859 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content\") pod \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958710 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities\") pod \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\" (UID: \"0c0ea750-41ef-4b4e-a574-2e50b3563f8b\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958763 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics\") pod \"e5c5042b-9158-4a46-b771-19f91eab097f\" (UID: \"e5c5042b-9158-4a46-b771-19f91eab097f\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958844 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content\") pod \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958903 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities\") pod \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\" (UID: \"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.958935 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content\") pod \"2190970d-eb97-4db5-8cb2-ad14997411ab\" (UID: 
\"2190970d-eb97-4db5-8cb2-ad14997411ab\") " Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.959399 4859 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.962430 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities" (OuterVolumeSpecName: "utilities") pod "0c0ea750-41ef-4b4e-a574-2e50b3563f8b" (UID: "0c0ea750-41ef-4b4e-a574-2e50b3563f8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.962518 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities" (OuterVolumeSpecName: "utilities") pod "2190970d-eb97-4db5-8cb2-ad14997411ab" (UID: "2190970d-eb97-4db5-8cb2-ad14997411ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.962642 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities" (OuterVolumeSpecName: "utilities") pod "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" (UID: "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.965474 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e5c5042b-9158-4a46-b771-19f91eab097f" (UID: "e5c5042b-9158-4a46-b771-19f91eab097f"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.966203 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94" (OuterVolumeSpecName: "kube-api-access-bzs94") pod "0c0ea750-41ef-4b4e-a574-2e50b3563f8b" (UID: "0c0ea750-41ef-4b4e-a574-2e50b3563f8b"). InnerVolumeSpecName "kube-api-access-bzs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.966230 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d" (OuterVolumeSpecName: "kube-api-access-87j4d") pod "2190970d-eb97-4db5-8cb2-ad14997411ab" (UID: "2190970d-eb97-4db5-8cb2-ad14997411ab"). InnerVolumeSpecName "kube-api-access-87j4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.966756 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg" (OuterVolumeSpecName: "kube-api-access-sfvqg") pod "e5c5042b-9158-4a46-b771-19f91eab097f" (UID: "e5c5042b-9158-4a46-b771-19f91eab097f"). InnerVolumeSpecName "kube-api-access-sfvqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.969614 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr" (OuterVolumeSpecName: "kube-api-access-ssgwr") pod "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" (UID: "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41"). InnerVolumeSpecName "kube-api-access-ssgwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.994608 4859 scope.go:117] "RemoveContainer" containerID="d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3" Jan 20 09:22:38 crc kubenswrapper[4859]: I0120 09:22:38.997726 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2190970d-eb97-4db5-8cb2-ad14997411ab" (UID: "2190970d-eb97-4db5-8cb2-ad14997411ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.025097 4859 scope.go:117] "RemoveContainer" containerID="71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.029932 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c0ea750-41ef-4b4e-a574-2e50b3563f8b" (UID: "0c0ea750-41ef-4b4e-a574-2e50b3563f8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060811 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzs94\" (UniqueName: \"kubernetes.io/projected/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-kube-api-access-bzs94\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060843 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87j4d\" (UniqueName: \"kubernetes.io/projected/2190970d-eb97-4db5-8cb2-ad14997411ab-kube-api-access-87j4d\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060858 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060872 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060889 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0ea750-41ef-4b4e-a574-2e50b3563f8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060901 4859 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5c5042b-9158-4a46-b771-19f91eab097f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060912 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: 
I0120 09:22:39.060923 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2190970d-eb97-4db5-8cb2-ad14997411ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060949 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvqg\" (UniqueName: \"kubernetes.io/projected/e5c5042b-9158-4a46-b771-19f91eab097f-kube-api-access-sfvqg\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.060959 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssgwr\" (UniqueName: \"kubernetes.io/projected/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-kube-api-access-ssgwr\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.061697 4859 scope.go:117] "RemoveContainer" containerID="505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.087040 4859 scope.go:117] "RemoveContainer" containerID="d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3" Jan 20 09:22:39 crc kubenswrapper[4859]: E0120 09:22:39.087461 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3\": container with ID starting with d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3 not found: ID does not exist" containerID="d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.087510 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3"} err="failed to get container status \"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3\": rpc error: code = NotFound desc = could not 
find container \"d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3\": container with ID starting with d59ee8958395d7f92a5c6b4c0ad6f476d6432eab35a56e87b0bc6900c7d8bea3 not found: ID does not exist" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.087544 4859 scope.go:117] "RemoveContainer" containerID="71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688" Jan 20 09:22:39 crc kubenswrapper[4859]: E0120 09:22:39.088013 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688\": container with ID starting with 71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688 not found: ID does not exist" containerID="71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.088043 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688"} err="failed to get container status \"71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688\": rpc error: code = NotFound desc = could not find container \"71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688\": container with ID starting with 71d0749c56efe5d3ea9c5050015533b74f8594d0f7cb29f3035258baec8e2688 not found: ID does not exist" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.088069 4859 scope.go:117] "RemoveContainer" containerID="505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686" Jan 20 09:22:39 crc kubenswrapper[4859]: E0120 09:22:39.089206 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686\": container with ID starting with 505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686 not found: ID 
does not exist" containerID="505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.089245 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686"} err="failed to get container status \"505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686\": rpc error: code = NotFound desc = could not find container \"505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686\": container with ID starting with 505f50ba370da98cc996d6b46e2c4826fe3e57d35aff853772665e691e4fd686 not found: ID does not exist" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.152642 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.156227 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpkn9"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.158578 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" (UID: "73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.163222 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.586401 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" path="/var/lib/kubelet/pods/662d5810-d101-40f8-9cf9-6e46d3177b6a/volumes" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.587355 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" path="/var/lib/kubelet/pods/e5c5042b-9158-4a46-b771-19f91eab097f/volumes" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.866623 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvdhk" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.866613 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvdhk" event={"ID":"73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41","Type":"ContainerDied","Data":"8280ef27db03fa20e66a3d68a6575ad95d8e6b5f4e0b93b414979658b27c84af"} Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.868023 4859 scope.go:117] "RemoveContainer" containerID="4eed9fc3cba48aa455ffb5bc20e03edc20df8b20dc4b276c4d648044fca48153" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.869449 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" event={"ID":"d30b8424-c2b6-4ae5-9790-74198806c882","Type":"ContainerStarted","Data":"9a2faf9a11447eb06e870e9bd300531962fdb942acfcd67400ad0e9202f27c8a"} Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.869496 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" event={"ID":"d30b8424-c2b6-4ae5-9790-74198806c882","Type":"ContainerStarted","Data":"ba322cbd82febb53e6a3b0d70f90a16734563297090f35c446d119908723c298"} Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.869697 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.875435 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnjx9" event={"ID":"0c0ea750-41ef-4b4e-a574-2e50b3563f8b","Type":"ContainerDied","Data":"aa9453135751697b6d09e88cccb9d00f6a6f229552c9671db69d32b6afd2f943"} Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.879664 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnjx9" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.880605 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcvpq" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.881317 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.881393 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcvpq" event={"ID":"2190970d-eb97-4db5-8cb2-ad14997411ab","Type":"ContainerDied","Data":"0e1c5fe67f6374ad7c847d4b99c8d0791d1ac2d0127bd487696541b660189072"} Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.883763 4859 scope.go:117] "RemoveContainer" containerID="52b23cd5ec33c131ec27c9fedcb3c1f02048e5a822e25939c07fee71b2e151be" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.889098 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rqpn6" podStartSLOduration=1.889076813 podStartE2EDuration="1.889076813s" podCreationTimestamp="2026-01-20 09:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:22:39.88670319 +0000 UTC m=+234.642719376" watchObservedRunningTime="2026-01-20 09:22:39.889076813 +0000 UTC m=+234.645092989" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.901843 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.909256 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvdhk"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.917122 4859 scope.go:117] "RemoveContainer" containerID="df6fb8771a1826ee187f56161e672b912f545d550de6cb209bb2736ab07736d1" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.945190 4859 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.948868 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xnjx9"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.958094 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.967018 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcvpq"] Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.971224 4859 scope.go:117] "RemoveContainer" containerID="9a85f92736c094b89cea55aff19dd2f979419b2abd364818e3bc1690fbbcc331" Jan 20 09:22:39 crc kubenswrapper[4859]: I0120 09:22:39.984276 4859 scope.go:117] "RemoveContainer" containerID="872587b39a61cb7b2373753cd22c26da19afc6f95dd129aba3907b444fba0453" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.002983 4859 scope.go:117] "RemoveContainer" containerID="aa1281fb40a65cd866f8fe24d8890ff7ffa5b051c5e5343c456be142470b978c" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421135 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82v9l"] Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421483 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421510 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421537 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421553 4859 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421577 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421594 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421611 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421627 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421648 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421663 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421687 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421702 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421720 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421736 4859 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421775 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421825 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421853 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421868 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="extract-content" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421888 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421903 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421920 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421954 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.421976 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.421991 4859 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" Jan 20 09:22:40 crc kubenswrapper[4859]: E0120 09:22:40.422014 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422032 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="extract-utilities" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422226 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422253 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422270 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c5042b-9158-4a46-b771-19f91eab097f" containerName="marketplace-operator" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422296 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="662d5810-d101-40f8-9cf9-6e46d3177b6a" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.422319 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" containerName="registry-server" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.423843 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.426095 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.432442 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82v9l"] Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.491228 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-catalog-content\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.491303 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9bs\" (UniqueName: \"kubernetes.io/projected/515405e5-4607-4dac-84e9-3ac488a0e03d-kube-api-access-jm9bs\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.491353 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-utilities\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.592166 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-utilities\") pod \"community-operators-82v9l\" (UID: 
\"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.592524 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-catalog-content\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.592629 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9bs\" (UniqueName: \"kubernetes.io/projected/515405e5-4607-4dac-84e9-3ac488a0e03d-kube-api-access-jm9bs\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.592619 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-utilities\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.593021 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515405e5-4607-4dac-84e9-3ac488a0e03d-catalog-content\") pod \"community-operators-82v9l\" (UID: \"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.612081 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9bs\" (UniqueName: \"kubernetes.io/projected/515405e5-4607-4dac-84e9-3ac488a0e03d-kube-api-access-jm9bs\") pod \"community-operators-82v9l\" (UID: 
\"515405e5-4607-4dac-84e9-3ac488a0e03d\") " pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:40 crc kubenswrapper[4859]: I0120 09:22:40.738379 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.011157 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"] Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.012085 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.014223 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.024827 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"] Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.100977 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.101316 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.101427 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qdq7k\" (UniqueName: \"kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.143998 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82v9l"] Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.202329 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.202765 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdq7k\" (UniqueName: \"kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.202831 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.202843 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 
20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.203130 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.221330 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdq7k\" (UniqueName: \"kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k\") pod \"redhat-marketplace-vmf9d\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") " pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.340106 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.581532 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0ea750-41ef-4b4e-a574-2e50b3563f8b" path="/var/lib/kubelet/pods/0c0ea750-41ef-4b4e-a574-2e50b3563f8b/volumes" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.582374 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2190970d-eb97-4db5-8cb2-ad14997411ab" path="/var/lib/kubelet/pods/2190970d-eb97-4db5-8cb2-ad14997411ab/volumes" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.583149 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41" path="/var/lib/kubelet/pods/73b4a3fe-bbef-41d3-98a3-b7fa81b2ed41/volumes" Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.739266 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"] Jan 20 09:22:41 crc kubenswrapper[4859]: W0120 09:22:41.742226 4859 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a598f90_74de_4b63_88c4_74fea20109ca.slice/crio-e16ad2cdd742abb69da668044b97e06c423b1ec8fe23851c686d503548a00efa WatchSource:0}: Error finding container e16ad2cdd742abb69da668044b97e06c423b1ec8fe23851c686d503548a00efa: Status 404 returned error can't find the container with id e16ad2cdd742abb69da668044b97e06c423b1ec8fe23851c686d503548a00efa Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.913939 4859 generic.go:334] "Generic (PLEG): container finished" podID="515405e5-4607-4dac-84e9-3ac488a0e03d" containerID="69d9250ea69b727c6d09eb40d9246ce7f0830de7a61deb329cd217daccdc20f1" exitCode=0 Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.914018 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v9l" event={"ID":"515405e5-4607-4dac-84e9-3ac488a0e03d","Type":"ContainerDied","Data":"69d9250ea69b727c6d09eb40d9246ce7f0830de7a61deb329cd217daccdc20f1"} Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.914043 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v9l" event={"ID":"515405e5-4607-4dac-84e9-3ac488a0e03d","Type":"ContainerStarted","Data":"8cdfce0714638c7836c16f20766098b20aa5239ee1eae152eb9da9698b3ce4fe"} Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.927736 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerStarted","Data":"e046dbdf3fcdd7bc0e27240c51384d930270d1316dadc4c576bb94a77e830f04"} Jan 20 09:22:41 crc kubenswrapper[4859]: I0120 09:22:41.927791 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerStarted","Data":"e16ad2cdd742abb69da668044b97e06c423b1ec8fe23851c686d503548a00efa"} Jan 20 09:22:42 
crc kubenswrapper[4859]: I0120 09:22:42.764342 4859 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.765971 4859 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766111 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766582 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af" gracePeriod=15 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766677 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc" gracePeriod=15 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766696 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3" gracePeriod=15 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766730 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79" gracePeriod=15 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.766738 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9" gracePeriod=15 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767479 4859 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767708 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767721 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767732 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767741 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767749 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767755 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767769 4859 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767775 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767794 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767799 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767809 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767829 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.767838 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767844 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767932 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767943 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: 
I0120 09:22:42.767951 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767963 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.767970 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.768150 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.826737 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.826808 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.826863 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.826965 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.827020 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.827244 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.827368 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.827503 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.853915 4859 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: E0120 09:22:42.882476 4859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.32:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-vmf9d.188c660dbf5f558c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-vmf9d,UID:9a598f90-74de-4b63-88c4-74fea20109ca,APIVersion:v1,ResourceVersion:30036,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 946ms (946ms including waiting). 
Image size: 1177725694 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:22:42.88106638 +0000 UTC m=+237.637082556,LastTimestamp:2026-01-20 09:22:42.88106638 +0000 UTC m=+237.637082556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928019 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928071 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928103 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928136 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 
09:22:42.928156 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928171 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928203 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928212 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928203 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928183 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928217 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928317 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928307 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928365 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928387 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.928442 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.934235 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.935449 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.936346 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9" exitCode=0 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.936368 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc" exitCode=0 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.936378 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3" exitCode=0 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.936386 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79" exitCode=2 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.936434 4859 scope.go:117] "RemoveContainer" containerID="667991bc321111735b4a3bdcc44572aca8b1f5122ddc82b7fa1c2f92215c91ce" Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.938262 4859 generic.go:334] "Generic (PLEG): container finished" podID="9a598f90-74de-4b63-88c4-74fea20109ca" containerID="e046dbdf3fcdd7bc0e27240c51384d930270d1316dadc4c576bb94a77e830f04" exitCode=0 Jan 20 09:22:42 crc kubenswrapper[4859]: I0120 09:22:42.938288 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerDied","Data":"e046dbdf3fcdd7bc0e27240c51384d930270d1316dadc4c576bb94a77e830f04"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.155226 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:22:43 crc kubenswrapper[4859]: W0120 09:22:43.170943 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3d32f02c09d01f04c5c8eb311d574fa12bd5acfe5702353f5b1b3a2443a07047 WatchSource:0}: Error finding container 3d32f02c09d01f04c5c8eb311d574fa12bd5acfe5702353f5b1b3a2443a07047: Status 404 returned error can't find the container with id 3d32f02c09d01f04c5c8eb311d574fa12bd5acfe5702353f5b1b3a2443a07047 Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.946929 4859 generic.go:334] "Generic (PLEG): container finished" podID="9a598f90-74de-4b63-88c4-74fea20109ca" containerID="5474015a3deb8a964359f7a8b24e0c21560f1124b01c7a5137c21023ac10a136" exitCode=0 Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.947037 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerDied","Data":"5474015a3deb8a964359f7a8b24e0c21560f1124b01c7a5137c21023ac10a136"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.948131 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.949690 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9bc21b323ac86d422aec53c6cc1e7d08df94ef1fec74dccdb0307bba382619d9"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.949742 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3d32f02c09d01f04c5c8eb311d574fa12bd5acfe5702353f5b1b3a2443a07047"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.950449 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: E0120 09:22:43.950559 4859 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 
20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.952221 4859 generic.go:334] "Generic (PLEG): container finished" podID="515405e5-4607-4dac-84e9-3ac488a0e03d" containerID="baea8bbbfc4e278a6ed20bdc8d34a377003b4f332fecda4208cdd5196b0a6db4" exitCode=0 Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.952278 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v9l" event={"ID":"515405e5-4607-4dac-84e9-3ac488a0e03d","Type":"ContainerDied","Data":"baea8bbbfc4e278a6ed20bdc8d34a377003b4f332fecda4208cdd5196b0a6db4"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.952659 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.952960 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.958386 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.961737 4859 generic.go:334] "Generic (PLEG): container finished" podID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" containerID="8b89cd8b04557f694a8a312e779442c7a8883ed2fe976997110444d0c6d75ed5" exitCode=0 Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.961778 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785","Type":"ContainerDied","Data":"8b89cd8b04557f694a8a312e779442c7a8883ed2fe976997110444d0c6d75ed5"} Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.962378 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.963603 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:43 crc kubenswrapper[4859]: I0120 09:22:43.964340 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:44 crc kubenswrapper[4859]: I0120 09:22:44.995735 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerStarted","Data":"de7fbc0c70fe62804b955b2b6a04811f08861844adcc36f1350ab18ac78315bf"} Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.003032 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.003372 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.003789 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.145182 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.145845 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.146290 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.146462 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.146717 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.147179 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.196195 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 09:22:45 crc 
kubenswrapper[4859]: I0120 09:22:45.196440 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.196486 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.196334 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.196739 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.196774 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.297920 4859 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.297947 4859 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.297956 4859 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.350657 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.351180 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.351686 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.352021 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" 
pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.352250 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.398512 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access\") pod \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.398613 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir\") pod \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.398648 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock\") pod \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\" (UID: \"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785\") " Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.398727 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" (UID: "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.398831 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock" (OuterVolumeSpecName: "var-lock") pod "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" (UID: "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.399292 4859 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.399319 4859 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.404702 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" (UID: "2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.500984 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.578331 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.578718 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.578973 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.579138 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.579583 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.635918 4859 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.636523 4859 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.637124 4859 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.637340 4859 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.637504 4859 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:45 crc kubenswrapper[4859]: I0120 09:22:45.637531 4859 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.637835 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="200ms" Jan 20 09:22:45 crc kubenswrapper[4859]: E0120 09:22:45.839331 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="400ms" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.001343 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v9l" event={"ID":"515405e5-4607-4dac-84e9-3ac488a0e03d","Type":"ContainerStarted","Data":"a68f66b9524b565b3bff44b6aeb22c825ed26884ebef600555582349e090b283"} Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.002467 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.003040 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.003360 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: 
connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.004008 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.004037 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785","Type":"ContainerDied","Data":"b66f1223df404aa91b469b818e9975ba9bce3bbb16f7217435bfe15a36d2e06d"} Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.005572 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b66f1223df404aa91b469b818e9975ba9bce3bbb16f7217435bfe15a36d2e06d" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.009100 4859 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af" exitCode=0 Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.009173 4859 scope.go:117] "RemoveContainer" containerID="8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.009827 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.010416 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.010629 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.010834 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.012239 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.014045 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 
38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.014883 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.015453 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.015902 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.016196 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.016690 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 
09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.017055 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.017313 4859 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.028503 4859 scope.go:117] "RemoveContainer" containerID="ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.044395 4859 scope.go:117] "RemoveContainer" containerID="6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.064934 4859 scope.go:117] "RemoveContainer" containerID="d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.077300 4859 scope.go:117] "RemoveContainer" containerID="a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.106835 4859 scope.go:117] "RemoveContainer" containerID="1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.132704 4859 scope.go:117] "RemoveContainer" containerID="8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.187050 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\": container with ID starting with 8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9 not found: ID does not exist" containerID="8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.187099 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9"} err="failed to get container status \"8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\": rpc error: code = NotFound desc = could not find container \"8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9\": container with ID starting with 8d271e7f569a1ff7fab26fb3fe10bed0060a36452b119401e1b9c7a7591446e9 not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.187127 4859 scope.go:117] "RemoveContainer" containerID="ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.187565 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\": container with ID starting with ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc not found: ID does not exist" containerID="ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.187599 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc"} err="failed to get container status \"ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\": rpc error: code = NotFound desc = could not find container 
\"ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc\": container with ID starting with ef7752245057408c69afb191f567251feaddc16f69628c9c196ff074ffad8fdc not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.187628 4859 scope.go:117] "RemoveContainer" containerID="6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.188626 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\": container with ID starting with 6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3 not found: ID does not exist" containerID="6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.188687 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3"} err="failed to get container status \"6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\": rpc error: code = NotFound desc = could not find container \"6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3\": container with ID starting with 6342c9756c20b815f928b0d341129f5352bbb837372e97f3358493879fb0a7d3 not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.188728 4859 scope.go:117] "RemoveContainer" containerID="d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.189329 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\": container with ID starting with d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79 not found: ID does not exist" 
containerID="d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.189368 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79"} err="failed to get container status \"d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\": rpc error: code = NotFound desc = could not find container \"d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79\": container with ID starting with d31ccecaf1d1e4d78931bc71e6392fae8d63318a7889a4557a51703a85120e79 not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.189396 4859 scope.go:117] "RemoveContainer" containerID="a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.189882 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\": container with ID starting with a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af not found: ID does not exist" containerID="a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.189915 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af"} err="failed to get container status \"a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\": rpc error: code = NotFound desc = could not find container \"a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af\": container with ID starting with a9a8bf451a52588cfc2e097f8152e7d5e2fd7d46a96032e716ea8e70b08847af not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.189936 4859 scope.go:117] 
"RemoveContainer" containerID="1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.191154 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\": container with ID starting with 1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699 not found: ID does not exist" containerID="1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699" Jan 20 09:22:46 crc kubenswrapper[4859]: I0120 09:22:46.191206 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699"} err="failed to get container status \"1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\": rpc error: code = NotFound desc = could not find container \"1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699\": container with ID starting with 1c9d0afac69361f99849ac73a90efdde3558bb94654f36d49de183b1d9a6a699 not found: ID does not exist" Jan 20 09:22:46 crc kubenswrapper[4859]: E0120 09:22:46.240888 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="800ms" Jan 20 09:22:47 crc kubenswrapper[4859]: E0120 09:22:47.042375 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="1.6s" Jan 20 09:22:48 crc kubenswrapper[4859]: E0120 09:22:48.643741 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="3.2s" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.739752 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.740193 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.791471 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.792212 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.792956 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:50 crc kubenswrapper[4859]: I0120 09:22:50.793875 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.106824 
4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-82v9l" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.107605 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.108143 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.108627 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.340619 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.340662 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.405981 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.406948 4859 status_manager.go:851] "Failed to get status for pod" 
podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.407871 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: I0120 09:22:51.408625 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:51 crc kubenswrapper[4859]: E0120 09:22:51.657627 4859 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.32:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" volumeName="registry-storage" Jan 20 09:22:51 crc kubenswrapper[4859]: E0120 09:22:51.845362 4859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.32:6443: connect: connection refused" interval="6.4s" Jan 20 09:22:52 crc kubenswrapper[4859]: I0120 09:22:52.113663 4859 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmf9d" Jan 20 09:22:52 crc kubenswrapper[4859]: I0120 09:22:52.115155 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:52 crc kubenswrapper[4859]: I0120 09:22:52.115397 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:52 crc kubenswrapper[4859]: I0120 09:22:52.115598 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:52 crc kubenswrapper[4859]: E0120 09:22:52.407252 4859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.32:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-vmf9d.188c660dbf5f558c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-vmf9d,UID:9a598f90-74de-4b63-88c4-74fea20109ca,APIVersion:v1,ResourceVersion:30036,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled 
image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 946ms (946ms including waiting). Image size: 1177725694 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:22:42.88106638 +0000 UTC m=+237.637082556,LastTimestamp:2026-01-20 09:22:42.88106638 +0000 UTC m=+237.637082556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.573694 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.574913 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.575142 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.575554 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.590028 4859 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.590126 4859 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:53 crc kubenswrapper[4859]: E0120 09:22:53.590437 4859 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:53 crc kubenswrapper[4859]: I0120 09:22:53.591014 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:54 crc kubenswrapper[4859]: I0120 09:22:54.056004 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c8db3e70c5501305c370c62e03d8cdd40f47314426abefe407e56f83b78af88"} Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.071323 4859 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a09cefbed21d2587bd246eb07e68558a73ae3f0bcd6f3ddf45804f634fde70db" exitCode=0 Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.071399 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a09cefbed21d2587bd246eb07e68558a73ae3f0bcd6f3ddf45804f634fde70db"} Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.071867 4859 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.072943 4859 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.072323 4859 status_manager.go:851] "Failed to get status for pod" podUID="515405e5-4607-4dac-84e9-3ac488a0e03d" pod="openshift-marketplace/community-operators-82v9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-82v9l\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:55 crc kubenswrapper[4859]: E0120 09:22:55.073425 4859 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.073622 4859 status_manager.go:851] "Failed to get status for pod" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.074140 4859 status_manager.go:851] "Failed to get status for pod" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" pod="openshift-marketplace/redhat-marketplace-vmf9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vmf9d\": dial tcp 38.102.83.32:6443: connect: connection refused" Jan 20 09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.678860 4859 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 
09:22:55 crc kubenswrapper[4859]: I0120 09:22:55.678920 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.082040 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3497ab15c0541b1779170e0785a03fccd170fb246cc5062370f6cde13c6a059e"} Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.082362 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9aed8f0cd3322314448b20e09663a2a470a136307069ab1665b1682eebf5d0da"} Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.082374 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ab16d04df3c73e7cba744b09b718537ab8a5d5eb7e06cd733105638ee1fb80c3"} Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.082387 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6707e77b0d475b5f3a353b62d82d5fde3d2bfba4fe0522fdd453a4597fcfb4c8"} Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.086483 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.086546 4859 generic.go:334] 
"Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f" exitCode=1 Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.086584 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f"} Jan 20 09:22:56 crc kubenswrapper[4859]: I0120 09:22:56.087094 4859 scope.go:117] "RemoveContainer" containerID="c018fb8cc3bc09b7c5af293d820dd4e31333e174b73be461263aa15f794e3a7f" Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.094138 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4f109d62fd0f3b19a10da2287bbc1701f7040dd6a484b979500ff39c0596782"} Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.094317 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.094508 4859 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.094542 4859 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.097433 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 09:22:57 crc kubenswrapper[4859]: I0120 09:22:57.097517 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b714be3cd3b59fa235533e92ca0a79bc3f43b58d23cfb13313a68fea15750088"} Jan 20 09:22:58 crc kubenswrapper[4859]: I0120 09:22:58.591903 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:58 crc kubenswrapper[4859]: I0120 09:22:58.592867 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:22:58 crc kubenswrapper[4859]: I0120 09:22:58.596486 4859 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]log ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]etcd ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-filter ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 09:22:58 crc kubenswrapper[4859]: 
[+]poststarthook/start-apiextensions-informers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-apiextensions-controllers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/crd-informer-synced ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-system-namespaces-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/bootstrap-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/start-kube-aggregator-informers ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-registration-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-discovery-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: 
[+]poststarthook/kube-apiserver-autoregistration ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]autoregister-completion ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-openapi-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 20 09:22:58 crc kubenswrapper[4859]: livez check failed Jan 20 09:22:58 crc kubenswrapper[4859]: I0120 09:22:58.596534 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:23:02 crc kubenswrapper[4859]: I0120 09:23:02.414245 4859 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:23:02 crc kubenswrapper[4859]: I0120 09:23:02.473508 4859 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d73dcc5b-e231-401a-8cff-470b4d510ebf" Jan 20 09:23:03 crc kubenswrapper[4859]: I0120 09:23:03.136637 4859 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:23:03 crc kubenswrapper[4859]: I0120 09:23:03.136680 4859 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6321ebe3-45b9-45a2-b590-72495f7208a6" Jan 20 09:23:03 crc kubenswrapper[4859]: I0120 09:23:03.139468 4859 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d73dcc5b-e231-401a-8cff-470b4d510ebf" Jan 20 09:23:04 crc kubenswrapper[4859]: I0120 09:23:04.312613 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:23:04 crc kubenswrapper[4859]: I0120 09:23:04.319362 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:23:05 crc kubenswrapper[4859]: I0120 09:23:05.147028 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:23:06 crc kubenswrapper[4859]: I0120 09:23:06.163405 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:23:12 crc kubenswrapper[4859]: I0120 09:23:12.399844 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 09:23:12 crc kubenswrapper[4859]: I0120 09:23:12.809202 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.248008 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.472635 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.564651 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.597062 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.840413 4859 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 09:23:13 crc kubenswrapper[4859]: I0120 09:23:13.982107 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.012926 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.023166 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.330522 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.657146 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.842173 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.842836 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 09:23:14 crc kubenswrapper[4859]: I0120 09:23:14.886972 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.415683 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.439472 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.470924 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.580461 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.604694 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.605452 4859 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.712974 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.763485 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.903071 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.905026 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.933715 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.933719 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 09:23:15 crc 
kubenswrapper[4859]: I0120 09:23:15.968070 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.985099 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 09:23:15 crc kubenswrapper[4859]: I0120 09:23:15.991518 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.004987 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.105408 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.231262 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.558566 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.559119 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.567084 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.701443 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 09:23:16 crc kubenswrapper[4859]: I0120 09:23:16.823191 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 09:23:17 crc 
kubenswrapper[4859]: I0120 09:23:17.034675 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 09:23:17 crc kubenswrapper[4859]: I0120 09:23:17.181956 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 09:23:17 crc kubenswrapper[4859]: I0120 09:23:17.308419 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 09:23:17 crc kubenswrapper[4859]: I0120 09:23:17.548514 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 09:23:17 crc kubenswrapper[4859]: I0120 09:23:17.588470 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 09:23:17 crc kubenswrapper[4859]: I0120 09:23:17.677919 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.146201 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.156493 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.210212 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.290527 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.325141 4859 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.358322 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.380301 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.389049 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.412705 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.498288 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.564044 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.668309 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.683071 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.698745 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.706952 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.741888 4859 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.823035 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.834886 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.864576 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 09:23:18 crc kubenswrapper[4859]: I0120 09:23:18.936727 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.020506 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.042400 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.078288 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.181854 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.220891 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.243992 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 
09:23:19.258308 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.316551 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.358022 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.363185 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.366907 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.531587 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.787797 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.821538 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.846502 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.906316 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.909628 4859 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.924463 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.945122 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.959565 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 09:23:19 crc kubenswrapper[4859]: I0120 09:23:19.977064 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.029607 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.033336 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.046904 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.108346 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.134518 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.140469 4859 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.143767 4859 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-82v9l" podStartSLOduration=37.171436442 podStartE2EDuration="40.143748145s" podCreationTimestamp="2026-01-20 09:22:40 +0000 UTC" firstStartedPulling="2026-01-20 09:22:41.918543777 +0000 UTC m=+236.674559953" lastFinishedPulling="2026-01-20 09:22:44.89085548 +0000 UTC m=+239.646871656" observedRunningTime="2026-01-20 09:23:02.471695285 +0000 UTC m=+257.227711471" watchObservedRunningTime="2026-01-20 09:23:20.143748145 +0000 UTC m=+274.899764361" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.146697 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmf9d" podStartSLOduration=36.346368289 podStartE2EDuration="39.146686745s" podCreationTimestamp="2026-01-20 09:22:41 +0000 UTC" firstStartedPulling="2026-01-20 09:22:41.934893533 +0000 UTC m=+236.690909709" lastFinishedPulling="2026-01-20 09:22:44.735211989 +0000 UTC m=+239.491228165" observedRunningTime="2026-01-20 09:23:02.504465407 +0000 UTC m=+257.260481593" watchObservedRunningTime="2026-01-20 09:23:20.146686745 +0000 UTC m=+274.902702961" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.147045 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.147098 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.147528 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.152769 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.159284 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.182054 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.182028196 podStartE2EDuration="18.182028196s" podCreationTimestamp="2026-01-20 09:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:23:20.177174604 +0000 UTC m=+274.933190780" watchObservedRunningTime="2026-01-20 09:23:20.182028196 +0000 UTC m=+274.938044402" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.306840 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.590874 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.610098 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.625357 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.687176 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.734844 4859 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.740477 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.742683 4859 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.765464 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.768177 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.817535 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.870905 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.917333 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.920085 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 09:23:20 crc kubenswrapper[4859]: I0120 09:23:20.973028 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.099022 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.170557 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.173665 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 09:23:21 crc 
kubenswrapper[4859]: I0120 09:23:21.389181 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.401946 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.436623 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.450963 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.494278 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.531997 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.599977 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.695215 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.847415 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.853667 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 09:23:21 crc kubenswrapper[4859]: I0120 09:23:21.870483 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 09:23:21 crc 
kubenswrapper[4859]: I0120 09:23:21.925929 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.062057 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.100691 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.173439 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.219191 4859 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.306114 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.311137 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.320983 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.336258 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.358323 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.365203 4859 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.377666 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.486297 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.538442 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.592796 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.636274 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.675040 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.676430 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.687143 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.717192 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.769949 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.819164 4859 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 09:23:22 crc kubenswrapper[4859]: I0120 09:23:22.843483 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.017671 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.028072 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.034326 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.109095 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.236924 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.280485 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.323769 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.334050 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.389924 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 
09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.397586 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.428128 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.430507 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.451666 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.456626 4859 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.490126 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.502956 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.536979 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.589944 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.599935 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.606080 4859 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.749370 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.765748 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.839202 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.839661 4859 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.840063 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9bc21b323ac86d422aec53c6cc1e7d08df94ef1fec74dccdb0307bba382619d9" gracePeriod=5 Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.892321 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.894311 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.913196 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.923240 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 
09:23:23 crc kubenswrapper[4859]: I0120 09:23:23.924909 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.051867 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.077883 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.079499 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.113722 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.127180 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.304088 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.379138 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.421528 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.455916 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.475718 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.540013 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.574446 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.610658 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.688187 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.793135 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.804152 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.829847 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 09:23:24 crc kubenswrapper[4859]: I0120 09:23:24.959727 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.018568 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.066672 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.177880 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.206874 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.214737 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.263508 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.414992 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.499860 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.500084 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.502847 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.519083 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.520236 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 09:23:25 crc 
kubenswrapper[4859]: I0120 09:23:25.529633 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.603971 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.641066 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.750537 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.799968 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.906161 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 09:23:25 crc kubenswrapper[4859]: I0120 09:23:25.944632 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.077318 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.149469 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.149517 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.269256 4859 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.276872 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.279099 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.363856 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.426182 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.457420 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.565273 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.754273 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.791130 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.860358 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.877776 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 09:23:26 crc kubenswrapper[4859]: I0120 09:23:26.987859 4859 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.005120 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.031117 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.180975 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.337140 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.536282 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.592229 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.752122 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 20 09:23:27 crc kubenswrapper[4859]: I0120 09:23:27.949519 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 20 09:23:28 crc kubenswrapper[4859]: I0120 09:23:28.052692 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 09:23:28 crc kubenswrapper[4859]: I0120 09:23:28.066959 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 20 09:23:28 crc kubenswrapper[4859]: I0120 09:23:28.499087 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 09:23:28 crc kubenswrapper[4859]: I0120 09:23:28.586970 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 09:23:28 crc kubenswrapper[4859]: I0120 09:23:28.598581 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.082475 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.148761 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.256221 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.256301 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.272065 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.320202 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.320309 4859 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9bc21b323ac86d422aec53c6cc1e7d08df94ef1fec74dccdb0307bba382619d9" exitCode=137
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.416197 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.431642 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.431718 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.535904 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536016 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536078 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536108 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536139 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536213 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536220 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536280 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536239 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536648 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536907 4859 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536935 4859 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.536953 4859 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.547768 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.549134 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.586725 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.637746 4859 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.637776 4859 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.717017 4859 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 20 09:23:29 crc kubenswrapper[4859]: I0120 09:23:29.873316 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.202230 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.273721 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.340725 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.340910 4859 scope.go:117] "RemoveContainer" containerID="9bc21b323ac86d422aec53c6cc1e7d08df94ef1fec74dccdb0307bba382619d9"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.341190 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.349232 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 09:23:30 crc kubenswrapper[4859]: I0120 09:23:30.956637 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 20 09:23:45 crc kubenswrapper[4859]: I0120 09:23:45.416157 4859 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.068537 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86ctm"]
Jan 20 09:23:54 crc kubenswrapper[4859]: E0120 09:23:54.069337 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.069356 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 20 09:23:54 crc kubenswrapper[4859]: E0120 09:23:54.069388 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" containerName="installer"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.069397 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" containerName="installer"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.069520 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be3f9e7-93ae-4a8d-9a9a-fca59c0b3785" containerName="installer"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.069534 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.070407 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.073214 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.083923 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86ctm"]
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.214740 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-catalog-content\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.214810 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-utilities\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.214836 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2sv\" (UniqueName: \"kubernetes.io/projected/d12f022a-c217-4103-ab89-df75a522d16c-kube-api-access-rb2sv\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.316274 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-catalog-content\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.316346 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-utilities\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.316378 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2sv\" (UniqueName: \"kubernetes.io/projected/d12f022a-c217-4103-ab89-df75a522d16c-kube-api-access-rb2sv\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.317210 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-catalog-content\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.317251 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d12f022a-c217-4103-ab89-df75a522d16c-utilities\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.336889 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2sv\" (UniqueName: \"kubernetes.io/projected/d12f022a-c217-4103-ab89-df75a522d16c-kube-api-access-rb2sv\") pod \"certified-operators-86ctm\" (UID: \"d12f022a-c217-4103-ab89-df75a522d16c\") " pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.388286 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86ctm"
Jan 20 09:23:54 crc kubenswrapper[4859]: I0120 09:23:54.906596 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86ctm"]
Jan 20 09:23:55 crc kubenswrapper[4859]: I0120 09:23:55.495692 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ctm" event={"ID":"d12f022a-c217-4103-ab89-df75a522d16c","Type":"ContainerStarted","Data":"a9336e2da312e4ff4f6847a13e12e11fcd59d49323bf34b85532abc2025860ea"}
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.447364 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvmjj"]
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.449112 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.451294 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.457867 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvmjj"]
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.502043 4859 generic.go:334] "Generic (PLEG): container finished" podID="d12f022a-c217-4103-ab89-df75a522d16c" containerID="018dfa0794a5ad38c2ab5c5ed48ece685d2b1799c4f87c85653c345e1e40da12" exitCode=0
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.502092 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ctm" event={"ID":"d12f022a-c217-4103-ab89-df75a522d16c","Type":"ContainerDied","Data":"018dfa0794a5ad38c2ab5c5ed48ece685d2b1799c4f87c85653c345e1e40da12"}
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.545343 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-catalog-content\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.545427 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lds5\" (UniqueName: \"kubernetes.io/projected/80af50d4-59da-4499-b0a4-00da43e07f80-kube-api-access-8lds5\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.545482 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-utilities\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.647075 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lds5\" (UniqueName: \"kubernetes.io/projected/80af50d4-59da-4499-b0a4-00da43e07f80-kube-api-access-8lds5\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.647147 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-utilities\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.647205 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-catalog-content\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.647624 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-catalog-content\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.648151 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80af50d4-59da-4499-b0a4-00da43e07f80-utilities\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.670149 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lds5\" (UniqueName: \"kubernetes.io/projected/80af50d4-59da-4499-b0a4-00da43e07f80-kube-api-access-8lds5\") pod \"redhat-operators-zvmjj\" (UID: \"80af50d4-59da-4499-b0a4-00da43e07f80\") " pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:56 crc kubenswrapper[4859]: I0120 09:23:56.761540 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvmjj"
Jan 20 09:23:57 crc kubenswrapper[4859]: I0120 09:23:57.185253 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvmjj"]
Jan 20 09:23:57 crc kubenswrapper[4859]: I0120 09:23:57.509655 4859 generic.go:334] "Generic (PLEG): container finished" podID="80af50d4-59da-4499-b0a4-00da43e07f80" containerID="a087a1a231293ba9a0dac717128f1334c1b602cf6fce74f98438cf61799645e2" exitCode=0
Jan 20 09:23:57 crc kubenswrapper[4859]: I0120 09:23:57.509829 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvmjj" event={"ID":"80af50d4-59da-4499-b0a4-00da43e07f80","Type":"ContainerDied","Data":"a087a1a231293ba9a0dac717128f1334c1b602cf6fce74f98438cf61799645e2"}
Jan 20 09:23:57 crc kubenswrapper[4859]: I0120 09:23:57.509995 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvmjj" event={"ID":"80af50d4-59da-4499-b0a4-00da43e07f80","Type":"ContainerStarted","Data":"347d9a62fd94a727f1296fe93bd8414a3a9be2e4538d4a624e39b1a745594287"}
Jan 20 09:23:57 crc kubenswrapper[4859]: I0120 09:23:57.512372 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ctm" event={"ID":"d12f022a-c217-4103-ab89-df75a522d16c","Type":"ContainerStarted","Data":"8f74331b10854e662d99f0c95472bc8430eedcf03902b81a1d41cbd6146f41a0"}
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.520638 4859 generic.go:334] "Generic (PLEG): container finished" podID="d12f022a-c217-4103-ab89-df75a522d16c" containerID="8f74331b10854e662d99f0c95472bc8430eedcf03902b81a1d41cbd6146f41a0" exitCode=0
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.520733 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ctm" event={"ID":"d12f022a-c217-4103-ab89-df75a522d16c","Type":"ContainerDied","Data":"8f74331b10854e662d99f0c95472bc8430eedcf03902b81a1d41cbd6146f41a0"}
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.860700 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4whtw"]
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.861959 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.873031 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4whtw"]
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.976801 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-catalog-content\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.976925 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnsc\" (UniqueName: \"kubernetes.io/projected/c29021fc-bea6-40b1-bb49-440f0225014f-kube-api-access-znnsc\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:58 crc kubenswrapper[4859]: I0120 09:23:58.976996 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-utilities\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.078194 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-catalog-content\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.078246 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnsc\" (UniqueName: \"kubernetes.io/projected/c29021fc-bea6-40b1-bb49-440f0225014f-kube-api-access-znnsc\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.078261 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-utilities\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.078653 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-catalog-content\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.078698 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29021fc-bea6-40b1-bb49-440f0225014f-utilities\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.104684 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnsc\" (UniqueName: \"kubernetes.io/projected/c29021fc-bea6-40b1-bb49-440f0225014f-kube-api-access-znnsc\") pod \"certified-operators-4whtw\" (UID: \"c29021fc-bea6-40b1-bb49-440f0225014f\") " pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.180095 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4whtw"
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.437953 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4whtw"]
Jan 20 09:23:59 crc kubenswrapper[4859]: W0120 09:23:59.445434 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29021fc_bea6_40b1_bb49_440f0225014f.slice/crio-b5334ad4ce23a217265f2ee784fbe2e69372c83ce3304cc1342aadfc326c6b02 WatchSource:0}: Error finding container b5334ad4ce23a217265f2ee784fbe2e69372c83ce3304cc1342aadfc326c6b02: Status 404 returned error can't find the container with id b5334ad4ce23a217265f2ee784fbe2e69372c83ce3304cc1342aadfc326c6b02
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.526880 4859 generic.go:334] "Generic (PLEG): container finished" podID="80af50d4-59da-4499-b0a4-00da43e07f80" containerID="c6fbf9d41d917ef34a9463b277902341fdd0a40698ccea06f9846cf2790553be" exitCode=0
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.526943 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvmjj" event={"ID":"80af50d4-59da-4499-b0a4-00da43e07f80","Type":"ContainerDied","Data":"c6fbf9d41d917ef34a9463b277902341fdd0a40698ccea06f9846cf2790553be"}
Jan 20 09:23:59 crc kubenswrapper[4859]: I0120 09:23:59.530135 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4whtw" event={"ID":"c29021fc-bea6-40b1-bb49-440f0225014f","Type":"ContainerStarted","Data":"b5334ad4ce23a217265f2ee784fbe2e69372c83ce3304cc1342aadfc326c6b02"}
Jan 20 09:24:00 crc kubenswrapper[4859]: I0120 09:24:00.547440 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ctm" event={"ID":"d12f022a-c217-4103-ab89-df75a522d16c","Type":"ContainerStarted","Data":"a070e3a177182ee57db0d033ea849dc68d861952ae0a7e26c9eb26d6593caad1"}
Jan 20 09:24:00 crc kubenswrapper[4859]: I0120 09:24:00.557600 4859 generic.go:334] "Generic (PLEG): container finished" podID="c29021fc-bea6-40b1-bb49-440f0225014f" containerID="8f9cb550e5ea83198366816564e76ca4c2cb1e4c43d3f78bad29ddc8e35c165e" exitCode=0
Jan 20 09:24:00 crc kubenswrapper[4859]: I0120 09:24:00.557651 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4whtw" event={"ID":"c29021fc-bea6-40b1-bb49-440f0225014f","Type":"ContainerDied","Data":"8f9cb550e5ea83198366816564e76ca4c2cb1e4c43d3f78bad29ddc8e35c165e"}
Jan 20 09:24:00 crc kubenswrapper[4859]: I0120 09:24:00.573463 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86ctm" podStartSLOduration=3.552798696 podStartE2EDuration="6.573434075s" podCreationTimestamp="2026-01-20 09:23:54 +0000 UTC" firstStartedPulling="2026-01-20 09:23:56.503363099 +0000 UTC m=+311.259379285" lastFinishedPulling="2026-01-20 09:23:59.523998488 +0000 UTC m=+314.280014664" observedRunningTime="2026-01-20 09:24:00.566548748 +0000 UTC m=+315.322564924" watchObservedRunningTime="2026-01-20 09:24:00.573434075 +0000 UTC m=+315.329450261"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.455380 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7dlf"]
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.456655 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.470313 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7dlf"]
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.508803 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2gx\" (UniqueName: \"kubernetes.io/projected/5643582d-eb19-4717-8f24-887e783a4533-kube-api-access-8n2gx\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.508856 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-utilities\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.508933 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-catalog-content\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.564394 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvmjj" event={"ID":"80af50d4-59da-4499-b0a4-00da43e07f80","Type":"ContainerStarted","Data":"81d8375c39c1596f5f7d2126e7c8fc03d40fce5a0b6544991f780ffdb50fb795"}
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.585964 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvmjj" podStartSLOduration=2.594133813 podStartE2EDuration="5.585948548s" podCreationTimestamp="2026-01-20 09:23:56 +0000 UTC" firstStartedPulling="2026-01-20 09:23:57.511425191 +0000 UTC m=+312.267441367" lastFinishedPulling="2026-01-20 09:24:00.503239926 +0000 UTC m=+315.259256102" observedRunningTime="2026-01-20 09:24:01.582280888 +0000 UTC m=+316.338297064" watchObservedRunningTime="2026-01-20 09:24:01.585948548 +0000 UTC m=+316.341964724"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.609735 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2gx\" (UniqueName: \"kubernetes.io/projected/5643582d-eb19-4717-8f24-887e783a4533-kube-api-access-8n2gx\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.610372 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-utilities\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.610618 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-catalog-content\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.611007 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-catalog-content\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.611604 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5643582d-eb19-4717-8f24-887e783a4533-utilities\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.627809 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2gx\" (UniqueName: \"kubernetes.io/projected/5643582d-eb19-4717-8f24-887e783a4533-kube-api-access-8n2gx\") pod \"redhat-operators-k7dlf\" (UID: \"5643582d-eb19-4717-8f24-887e783a4533\") " pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:01 crc kubenswrapper[4859]: I0120 09:24:01.785146 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7dlf"
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.183665 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7dlf"]
Jan 20 09:24:02 crc kubenswrapper[4859]: W0120 09:24:02.192852 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5643582d_eb19_4717_8f24_887e783a4533.slice/crio-60a77d26bd0cb90a05e5c5f861f54ee243c999a04fd690ac7dc82eca074c05f9 WatchSource:0}: Error finding container 60a77d26bd0cb90a05e5c5f861f54ee243c999a04fd690ac7dc82eca074c05f9: Status 404 returned error can't find the container with id 60a77d26bd0cb90a05e5c5f861f54ee243c999a04fd690ac7dc82eca074c05f9
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.448560 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6k2nj"]
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.451287 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k2nj"
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.460546 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k2nj"]
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.575976 4859 generic.go:334] "Generic (PLEG): container finished" podID="5643582d-eb19-4717-8f24-887e783a4533" containerID="a4217f8affb87f171463d4601b69497aa809286214c11a493bd712eb1b0389cc" exitCode=0
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.576036 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7dlf" event={"ID":"5643582d-eb19-4717-8f24-887e783a4533","Type":"ContainerDied","Data":"a4217f8affb87f171463d4601b69497aa809286214c11a493bd712eb1b0389cc"}
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.576059 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7dlf" event={"ID":"5643582d-eb19-4717-8f24-887e783a4533","Type":"ContainerStarted","Data":"60a77d26bd0cb90a05e5c5f861f54ee243c999a04fd690ac7dc82eca074c05f9"}
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.581443 4859 generic.go:334] "Generic (PLEG): container finished" podID="c29021fc-bea6-40b1-bb49-440f0225014f" containerID="1649f6ce23b57f5e855307378f49cefdab646cca3ca82ac3da0c2a8b14387b4b" exitCode=0
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.582834 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4whtw" event={"ID":"c29021fc-bea6-40b1-bb49-440f0225014f","Type":"ContainerDied","Data":"1649f6ce23b57f5e855307378f49cefdab646cca3ca82ac3da0c2a8b14387b4b"}
Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.623406 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfrf\" (UniqueName:
\"kubernetes.io/projected/ed631598-71cb-49af-9e49-e6bc8e4b2208-kube-api-access-5hfrf\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.623516 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-catalog-content\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.623617 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-utilities\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.724642 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-catalog-content\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.724717 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-utilities\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.724823 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfrf\" 
(UniqueName: \"kubernetes.io/projected/ed631598-71cb-49af-9e49-e6bc8e4b2208-kube-api-access-5hfrf\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.726154 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-utilities\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.726758 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed631598-71cb-49af-9e49-e6bc8e4b2208-catalog-content\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.745600 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfrf\" (UniqueName: \"kubernetes.io/projected/ed631598-71cb-49af-9e49-e6bc8e4b2208-kube-api-access-5hfrf\") pod \"certified-operators-6k2nj\" (UID: \"ed631598-71cb-49af-9e49-e6bc8e4b2208\") " pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:02 crc kubenswrapper[4859]: I0120 09:24:02.804424 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.267773 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k2nj"] Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.586721 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2nj" event={"ID":"ed631598-71cb-49af-9e49-e6bc8e4b2208","Type":"ContainerStarted","Data":"ff3ed3a8e1940662d5c438b5a484aa28c8edaaad2323a24757d8b6a0b0a8afd6"} Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.854405 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57hgx"] Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.855883 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.865078 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57hgx"] Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.942566 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmtr\" (UniqueName: \"kubernetes.io/projected/d52b569d-e6cf-4afb-bb63-463e375b4e62-kube-api-access-gqmtr\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.942968 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-catalog-content\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:03 crc kubenswrapper[4859]: I0120 09:24:03.943000 
4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-utilities\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.044029 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmtr\" (UniqueName: \"kubernetes.io/projected/d52b569d-e6cf-4afb-bb63-463e375b4e62-kube-api-access-gqmtr\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.044074 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-catalog-content\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.044094 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-utilities\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.044468 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-utilities\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.044605 4859 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52b569d-e6cf-4afb-bb63-463e375b4e62-catalog-content\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.064230 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqmtr\" (UniqueName: \"kubernetes.io/projected/d52b569d-e6cf-4afb-bb63-463e375b4e62-kube-api-access-gqmtr\") pod \"redhat-operators-57hgx\" (UID: \"d52b569d-e6cf-4afb-bb63-463e375b4e62\") " pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.171162 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.389600 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86ctm" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.389656 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86ctm" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.430112 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86ctm" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.594022 4859 generic.go:334] "Generic (PLEG): container finished" podID="ed631598-71cb-49af-9e49-e6bc8e4b2208" containerID="c3220f607393d1f4dc58257103ced2354136af5665f669343838569a18f4cc40" exitCode=0 Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.594130 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2nj" event={"ID":"ed631598-71cb-49af-9e49-e6bc8e4b2208","Type":"ContainerDied","Data":"c3220f607393d1f4dc58257103ced2354136af5665f669343838569a18f4cc40"} 
Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.596752 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4whtw" event={"ID":"c29021fc-bea6-40b1-bb49-440f0225014f","Type":"ContainerStarted","Data":"052eafb5b62ddc04d51d65810a4472ad7d5cdb9275358739123497f740273651"} Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.637799 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4whtw" podStartSLOduration=3.217629181 podStartE2EDuration="6.637761944s" podCreationTimestamp="2026-01-20 09:23:58 +0000 UTC" firstStartedPulling="2026-01-20 09:24:00.559036363 +0000 UTC m=+315.315052539" lastFinishedPulling="2026-01-20 09:24:03.979169126 +0000 UTC m=+318.735185302" observedRunningTime="2026-01-20 09:24:04.63685501 +0000 UTC m=+319.392871216" watchObservedRunningTime="2026-01-20 09:24:04.637761944 +0000 UTC m=+319.393778110" Jan 20 09:24:04 crc kubenswrapper[4859]: I0120 09:24:04.655277 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86ctm" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.053575 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq8mj"] Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.055334 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.056044 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-utilities\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.056130 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-catalog-content\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.056177 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzw76\" (UniqueName: \"kubernetes.io/projected/51c1ea29-76e0-4ee4-ac99-205c1a42832e-kube-api-access-rzw76\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.066123 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq8mj"] Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.157204 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-catalog-content\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.157283 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rzw76\" (UniqueName: \"kubernetes.io/projected/51c1ea29-76e0-4ee4-ac99-205c1a42832e-kube-api-access-rzw76\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.157352 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-utilities\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.157950 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-utilities\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.157975 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c1ea29-76e0-4ee4-ac99-205c1a42832e-catalog-content\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.189084 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzw76\" (UniqueName: \"kubernetes.io/projected/51c1ea29-76e0-4ee4-ac99-205c1a42832e-kube-api-access-rzw76\") pod \"certified-operators-fq8mj\" (UID: \"51c1ea29-76e0-4ee4-ac99-205c1a42832e\") " pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:05 crc kubenswrapper[4859]: I0120 09:24:05.372056 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.076022 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57hgx"] Jan 20 09:24:06 crc kubenswrapper[4859]: W0120 09:24:06.081486 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c1ea29_76e0_4ee4_ac99_205c1a42832e.slice/crio-3e49c40ad09183e9c91ff22fef84f1f1c46342b0e0756ff135c3fe769eae491e WatchSource:0}: Error finding container 3e49c40ad09183e9c91ff22fef84f1f1c46342b0e0756ff135c3fe769eae491e: Status 404 returned error can't find the container with id 3e49c40ad09183e9c91ff22fef84f1f1c46342b0e0756ff135c3fe769eae491e Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.084261 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq8mj"] Jan 20 09:24:06 crc kubenswrapper[4859]: W0120 09:24:06.085377 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd52b569d_e6cf_4afb_bb63_463e375b4e62.slice/crio-7c1efc644cc8c6c11b028ec3514f10c0d2364f32481e936a63084495a6141c11 WatchSource:0}: Error finding container 7c1efc644cc8c6c11b028ec3514f10c0d2364f32481e936a63084495a6141c11: Status 404 returned error can't find the container with id 7c1efc644cc8c6c11b028ec3514f10c0d2364f32481e936a63084495a6141c11 Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.451495 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dfgvl"] Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.453100 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.461193 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfgvl"] Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.476988 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qck7\" (UniqueName: \"kubernetes.io/projected/125927e0-768a-4ab9-b257-cb655ed95a2c-kube-api-access-4qck7\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.477027 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-utilities\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.477117 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-catalog-content\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.577872 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-catalog-content\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.577936 4859 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4qck7\" (UniqueName: \"kubernetes.io/projected/125927e0-768a-4ab9-b257-cb655ed95a2c-kube-api-access-4qck7\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.577953 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-utilities\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.578478 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-catalog-content\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.578541 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125927e0-768a-4ab9-b257-cb655ed95a2c-utilities\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.615717 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qck7\" (UniqueName: \"kubernetes.io/projected/125927e0-768a-4ab9-b257-cb655ed95a2c-kube-api-access-4qck7\") pod \"redhat-operators-dfgvl\" (UID: \"125927e0-768a-4ab9-b257-cb655ed95a2c\") " pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.646194 4859 generic.go:334] "Generic (PLEG): container finished" podID="51c1ea29-76e0-4ee4-ac99-205c1a42832e" 
containerID="aa6e98a38c5949cf16b2b7ccdf660621193491d0bb448e764e6b8819288097e6" exitCode=0 Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.646463 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq8mj" event={"ID":"51c1ea29-76e0-4ee4-ac99-205c1a42832e","Type":"ContainerDied","Data":"aa6e98a38c5949cf16b2b7ccdf660621193491d0bb448e764e6b8819288097e6"} Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.646551 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq8mj" event={"ID":"51c1ea29-76e0-4ee4-ac99-205c1a42832e","Type":"ContainerStarted","Data":"3e49c40ad09183e9c91ff22fef84f1f1c46342b0e0756ff135c3fe769eae491e"} Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.649937 4859 generic.go:334] "Generic (PLEG): container finished" podID="d52b569d-e6cf-4afb-bb63-463e375b4e62" containerID="7db41b65a4622592241ecaf614ebaec74528a0af022e7e6f02f26e88ce2ff302" exitCode=0 Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.649998 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57hgx" event={"ID":"d52b569d-e6cf-4afb-bb63-463e375b4e62","Type":"ContainerDied","Data":"7db41b65a4622592241ecaf614ebaec74528a0af022e7e6f02f26e88ce2ff302"} Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.650022 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57hgx" event={"ID":"d52b569d-e6cf-4afb-bb63-463e375b4e62","Type":"ContainerStarted","Data":"7c1efc644cc8c6c11b028ec3514f10c0d2364f32481e936a63084495a6141c11"} Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.656413 4859 generic.go:334] "Generic (PLEG): container finished" podID="5643582d-eb19-4717-8f24-887e783a4533" containerID="1cf19a58d762ae6a4f0104617bfb1cdbf93d01238de63a8c5f283a979b805a93" exitCode=0 Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.656466 4859 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-k7dlf" event={"ID":"5643582d-eb19-4717-8f24-887e783a4533","Type":"ContainerDied","Data":"1cf19a58d762ae6a4f0104617bfb1cdbf93d01238de63a8c5f283a979b805a93"} Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.762004 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvmjj" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.762067 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvmjj" Jan 20 09:24:06 crc kubenswrapper[4859]: I0120 09:24:06.802160 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.263772 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfgvl"] Jan 20 09:24:07 crc kubenswrapper[4859]: W0120 09:24:07.278704 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125927e0_768a_4ab9_b257_cb655ed95a2c.slice/crio-27c714bbc4f765d79c11c78495cce69796aba38b59cb00cd5a8192a1095d7fb8 WatchSource:0}: Error finding container 27c714bbc4f765d79c11c78495cce69796aba38b59cb00cd5a8192a1095d7fb8: Status 404 returned error can't find the container with id 27c714bbc4f765d79c11c78495cce69796aba38b59cb00cd5a8192a1095d7fb8 Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.317755 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wgt5x"] Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.318589 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.375845 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wgt5x"] Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.493743 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp69j\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-kube-api-access-xp69j\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.493843 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-tls\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.493905 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8b5aebc7-90bf-4838-93c7-8f8071405753-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.493945 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8b5aebc7-90bf-4838-93c7-8f8071405753-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.493979 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.494004 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-trusted-ca\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.494030 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-certificates\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.494275 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-bound-sa-token\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.520134 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595125 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-bound-sa-token\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595175 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp69j\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-kube-api-access-xp69j\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595197 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-tls\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595219 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8b5aebc7-90bf-4838-93c7-8f8071405753-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595235 4859 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8b5aebc7-90bf-4838-93c7-8f8071405753-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595252 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-trusted-ca\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595268 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-certificates\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.595878 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8b5aebc7-90bf-4838-93c7-8f8071405753-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.596465 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-certificates\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.597921 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b5aebc7-90bf-4838-93c7-8f8071405753-trusted-ca\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.601461 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-registry-tls\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.605559 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8b5aebc7-90bf-4838-93c7-8f8071405753-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.613229 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp69j\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-kube-api-access-xp69j\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: \"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.624082 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b5aebc7-90bf-4838-93c7-8f8071405753-bound-sa-token\") pod \"image-registry-66df7c8f76-wgt5x\" (UID: 
\"8b5aebc7-90bf-4838-93c7-8f8071405753\") " pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.665542 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.667695 4859 generic.go:334] "Generic (PLEG): container finished" podID="ed631598-71cb-49af-9e49-e6bc8e4b2208" containerID="7330f84df6bac3194682e6d55e502617003efc2559c9f6b7982c43fe93728fa8" exitCode=0 Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.667872 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2nj" event={"ID":"ed631598-71cb-49af-9e49-e6bc8e4b2208","Type":"ContainerDied","Data":"7330f84df6bac3194682e6d55e502617003efc2559c9f6b7982c43fe93728fa8"} Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.672305 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7dlf" event={"ID":"5643582d-eb19-4717-8f24-887e783a4533","Type":"ContainerStarted","Data":"d527c6e6e08850c32e1270cd43992472185c5199ae22a5efc36c0c63f7a60254"} Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.677681 4859 generic.go:334] "Generic (PLEG): container finished" podID="125927e0-768a-4ab9-b257-cb655ed95a2c" containerID="c9a453e29710d2b446f6c6717f2c8984a8fe546c28870207007c1b0786bb2276" exitCode=0 Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.677720 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfgvl" event={"ID":"125927e0-768a-4ab9-b257-cb655ed95a2c","Type":"ContainerDied","Data":"c9a453e29710d2b446f6c6717f2c8984a8fe546c28870207007c1b0786bb2276"} Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.677741 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfgvl" 
event={"ID":"125927e0-768a-4ab9-b257-cb655ed95a2c","Type":"ContainerStarted","Data":"27c714bbc4f765d79c11c78495cce69796aba38b59cb00cd5a8192a1095d7fb8"} Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.733070 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7dlf" podStartSLOduration=2.116622801 podStartE2EDuration="6.733052204s" podCreationTimestamp="2026-01-20 09:24:01 +0000 UTC" firstStartedPulling="2026-01-20 09:24:02.577326586 +0000 UTC m=+317.333342762" lastFinishedPulling="2026-01-20 09:24:07.193755989 +0000 UTC m=+321.949772165" observedRunningTime="2026-01-20 09:24:07.729300441 +0000 UTC m=+322.485316637" watchObservedRunningTime="2026-01-20 09:24:07.733052204 +0000 UTC m=+322.489068380" Jan 20 09:24:07 crc kubenswrapper[4859]: I0120 09:24:07.802905 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvmjj" podUID="80af50d4-59da-4499-b0a4-00da43e07f80" containerName="registry-server" probeResult="failure" output=< Jan 20 09:24:07 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:24:07 crc kubenswrapper[4859]: > Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.126906 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wgt5x"] Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.685079 4859 generic.go:334] "Generic (PLEG): container finished" podID="51c1ea29-76e0-4ee4-ac99-205c1a42832e" containerID="d3b251fb41a3ff926ec87eb17adc50a26b3d3f0594765edaf8b8062350990dce" exitCode=0 Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.685176 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq8mj" event={"ID":"51c1ea29-76e0-4ee4-ac99-205c1a42832e","Type":"ContainerDied","Data":"d3b251fb41a3ff926ec87eb17adc50a26b3d3f0594765edaf8b8062350990dce"} Jan 20 09:24:08 crc kubenswrapper[4859]: 
I0120 09:24:08.688100 4859 generic.go:334] "Generic (PLEG): container finished" podID="d52b569d-e6cf-4afb-bb63-463e375b4e62" containerID="9b84f5573f9064ee2e2d14b954e17b67307f8c4b4acd032a605a08b7d7f76acb" exitCode=0 Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.688154 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57hgx" event={"ID":"d52b569d-e6cf-4afb-bb63-463e375b4e62","Type":"ContainerDied","Data":"9b84f5573f9064ee2e2d14b954e17b67307f8c4b4acd032a605a08b7d7f76acb"} Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.690260 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" event={"ID":"8b5aebc7-90bf-4838-93c7-8f8071405753","Type":"ContainerStarted","Data":"353f78c473b2600af6e92641fd01d54726d920765c0c7712a6c0a765aa33a0f6"} Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.690284 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" event={"ID":"8b5aebc7-90bf-4838-93c7-8f8071405753","Type":"ContainerStarted","Data":"c4a83be3209f4b9a25a46a1dbcc5d51966140cf34343861ede65bfbac5cb157a"} Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.690642 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.693294 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2nj" event={"ID":"ed631598-71cb-49af-9e49-e6bc8e4b2208","Type":"ContainerStarted","Data":"998f91af0bbcc696c5f9184266744732e734814c2c0aaf24485de1dd9234736f"} Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.758225 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" podStartSLOduration=1.7582059 podStartE2EDuration="1.7582059s" 
podCreationTimestamp="2026-01-20 09:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:24:08.756710779 +0000 UTC m=+323.512726965" watchObservedRunningTime="2026-01-20 09:24:08.7582059 +0000 UTC m=+323.514222086" Jan 20 09:24:08 crc kubenswrapper[4859]: I0120 09:24:08.779284 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6k2nj" podStartSLOduration=4.153523383 podStartE2EDuration="6.779266033s" podCreationTimestamp="2026-01-20 09:24:02 +0000 UTC" firstStartedPulling="2026-01-20 09:24:05.604903374 +0000 UTC m=+320.360919570" lastFinishedPulling="2026-01-20 09:24:08.230646034 +0000 UTC m=+322.986662220" observedRunningTime="2026-01-20 09:24:08.776386945 +0000 UTC m=+323.532403151" watchObservedRunningTime="2026-01-20 09:24:08.779266033 +0000 UTC m=+323.535282209" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.181175 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4whtw" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.181666 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4whtw" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.257975 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4whtw" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.700754 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq8mj" event={"ID":"51c1ea29-76e0-4ee4-ac99-205c1a42832e","Type":"ContainerStarted","Data":"1c80a23778eef79a84e2c1f4f553684a25ee4a32c5b0af882d6b2bb5c641a294"} Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.704400 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-57hgx" event={"ID":"d52b569d-e6cf-4afb-bb63-463e375b4e62","Type":"ContainerStarted","Data":"2f8d04ed4c9e603e5473a30dfd5a0a9764783bfe80d54ae64edcf53452afe53c"} Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.706249 4859 generic.go:334] "Generic (PLEG): container finished" podID="125927e0-768a-4ab9-b257-cb655ed95a2c" containerID="de1dbb6c7180e3820ea43a782a3e500d588406bc52c9a82fc556b65abf39e140" exitCode=0 Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.706344 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfgvl" event={"ID":"125927e0-768a-4ab9-b257-cb655ed95a2c","Type":"ContainerDied","Data":"de1dbb6c7180e3820ea43a782a3e500d588406bc52c9a82fc556b65abf39e140"} Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.756267 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq8mj" podStartSLOduration=2.014790903 podStartE2EDuration="4.75624635s" podCreationTimestamp="2026-01-20 09:24:05 +0000 UTC" firstStartedPulling="2026-01-20 09:24:06.647500145 +0000 UTC m=+321.403516321" lastFinishedPulling="2026-01-20 09:24:09.388955592 +0000 UTC m=+324.144971768" observedRunningTime="2026-01-20 09:24:09.734250661 +0000 UTC m=+324.490266867" watchObservedRunningTime="2026-01-20 09:24:09.75624635 +0000 UTC m=+324.512262526" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.778128 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57hgx" podStartSLOduration=4.260645469 podStartE2EDuration="6.778111465s" podCreationTimestamp="2026-01-20 09:24:03 +0000 UTC" firstStartedPulling="2026-01-20 09:24:06.652251984 +0000 UTC m=+321.408268160" lastFinishedPulling="2026-01-20 09:24:09.16971798 +0000 UTC m=+323.925734156" observedRunningTime="2026-01-20 09:24:09.754567724 +0000 UTC m=+324.510583890" watchObservedRunningTime="2026-01-20 
09:24:09.778111465 +0000 UTC m=+324.534127641" Jan 20 09:24:09 crc kubenswrapper[4859]: I0120 09:24:09.779098 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4whtw" Jan 20 09:24:10 crc kubenswrapper[4859]: I0120 09:24:10.048396 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:24:10 crc kubenswrapper[4859]: I0120 09:24:10.048454 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:24:10 crc kubenswrapper[4859]: I0120 09:24:10.722483 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfgvl" event={"ID":"125927e0-768a-4ab9-b257-cb655ed95a2c","Type":"ContainerStarted","Data":"c0245be44ac1de2bcc7e878c41e728d4ed70ccde741d2c8b455d2fad356ea5ed"} Jan 20 09:24:10 crc kubenswrapper[4859]: I0120 09:24:10.751494 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dfgvl" podStartSLOduration=2.017510529 podStartE2EDuration="4.751474373s" podCreationTimestamp="2026-01-20 09:24:06 +0000 UTC" firstStartedPulling="2026-01-20 09:24:07.683943488 +0000 UTC m=+322.439959664" lastFinishedPulling="2026-01-20 09:24:10.417907322 +0000 UTC m=+325.173923508" observedRunningTime="2026-01-20 09:24:10.742841048 +0000 UTC m=+325.498857234" watchObservedRunningTime="2026-01-20 09:24:10.751474373 +0000 UTC m=+325.507490559" Jan 20 09:24:11 crc kubenswrapper[4859]: I0120 09:24:11.786163 
4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7dlf" Jan 20 09:24:11 crc kubenswrapper[4859]: I0120 09:24:11.786234 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k7dlf" Jan 20 09:24:12 crc kubenswrapper[4859]: I0120 09:24:12.805551 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:12 crc kubenswrapper[4859]: I0120 09:24:12.806700 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:12 crc kubenswrapper[4859]: I0120 09:24:12.835058 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7dlf" podUID="5643582d-eb19-4717-8f24-887e783a4533" containerName="registry-server" probeResult="failure" output=< Jan 20 09:24:12 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:24:12 crc kubenswrapper[4859]: > Jan 20 09:24:12 crc kubenswrapper[4859]: I0120 09:24:12.854984 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:13 crc kubenswrapper[4859]: I0120 09:24:13.770334 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6k2nj" Jan 20 09:24:14 crc kubenswrapper[4859]: I0120 09:24:14.171675 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:14 crc kubenswrapper[4859]: I0120 09:24:14.171830 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:15 crc kubenswrapper[4859]: I0120 09:24:15.218365 4859 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-57hgx" podUID="d52b569d-e6cf-4afb-bb63-463e375b4e62" containerName="registry-server" probeResult="failure" output=< Jan 20 09:24:15 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:24:15 crc kubenswrapper[4859]: > Jan 20 09:24:15 crc kubenswrapper[4859]: I0120 09:24:15.372606 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:15 crc kubenswrapper[4859]: I0120 09:24:15.372669 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:15 crc kubenswrapper[4859]: I0120 09:24:15.418306 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:15 crc kubenswrapper[4859]: I0120 09:24:15.816367 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq8mj" Jan 20 09:24:16 crc kubenswrapper[4859]: I0120 09:24:16.802829 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:16 crc kubenswrapper[4859]: I0120 09:24:16.803928 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:16 crc kubenswrapper[4859]: I0120 09:24:16.819684 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvmjj" Jan 20 09:24:16 crc kubenswrapper[4859]: I0120 09:24:16.862739 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvmjj" Jan 20 09:24:16 crc kubenswrapper[4859]: I0120 09:24:16.867353 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 
20 09:24:17 crc kubenswrapper[4859]: I0120 09:24:17.834165 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dfgvl" Jan 20 09:24:21 crc kubenswrapper[4859]: I0120 09:24:21.863408 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7dlf" Jan 20 09:24:21 crc kubenswrapper[4859]: I0120 09:24:21.935591 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7dlf" Jan 20 09:24:25 crc kubenswrapper[4859]: I0120 09:24:24.234357 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:25 crc kubenswrapper[4859]: I0120 09:24:24.308411 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57hgx" Jan 20 09:24:27 crc kubenswrapper[4859]: I0120 09:24:27.678530 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wgt5x" Jan 20 09:24:27 crc kubenswrapper[4859]: I0120 09:24:27.773914 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:24:40 crc kubenswrapper[4859]: I0120 09:24:40.048506 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:24:40 crc kubenswrapper[4859]: I0120 09:24:40.049254 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:24:52 crc kubenswrapper[4859]: I0120 09:24:52.868212 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" podUID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" containerName="registry" containerID="cri-o://8e1935398314eb518606ac13d1c632abeed0e3592a46af104d06041814e7e41c" gracePeriod=30 Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.014088 4859 generic.go:334] "Generic (PLEG): container finished" podID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" containerID="8e1935398314eb518606ac13d1c632abeed0e3592a46af104d06041814e7e41c" exitCode=0 Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.014148 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" event={"ID":"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96","Type":"ContainerDied","Data":"8e1935398314eb518606ac13d1c632abeed0e3592a46af104d06041814e7e41c"} Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.277557 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.387517 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.387646 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388093 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388181 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388266 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388330 4859 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388365 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388384 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq7lf\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf\") pod \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\" (UID: \"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96\") " Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.388618 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.390098 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.396064 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.396454 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf" (OuterVolumeSpecName: "kube-api-access-dq7lf") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "kube-api-access-dq7lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.396893 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.402232 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.404334 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.407683 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" (UID: "8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489481 4859 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489527 4859 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489540 4859 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489551 4859 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489563 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq7lf\" (UniqueName: \"kubernetes.io/projected/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-kube-api-access-dq7lf\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489573 4859 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:53 crc kubenswrapper[4859]: I0120 09:24:53.489584 4859 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:24:54 crc kubenswrapper[4859]: I0120 09:24:54.024325 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" event={"ID":"8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96","Type":"ContainerDied","Data":"0cc6c973622d320792dd02208ece4ea50bf328b5c9f0ab5da60b168ead7b61e2"} Jan 20 09:24:54 crc kubenswrapper[4859]: I0120 09:24:54.025024 4859 scope.go:117] "RemoveContainer" containerID="8e1935398314eb518606ac13d1c632abeed0e3592a46af104d06041814e7e41c" Jan 20 09:24:54 crc kubenswrapper[4859]: I0120 09:24:54.024600 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxvwj" Jan 20 09:24:54 crc kubenswrapper[4859]: I0120 09:24:54.055910 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:24:54 crc kubenswrapper[4859]: I0120 09:24:54.065324 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxvwj"] Jan 20 09:24:55 crc kubenswrapper[4859]: I0120 09:24:55.585877 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" path="/var/lib/kubelet/pods/8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96/volumes" Jan 20 09:25:10 crc kubenswrapper[4859]: I0120 09:25:10.048928 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:25:10 crc kubenswrapper[4859]: I0120 09:25:10.049507 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:25:10 crc kubenswrapper[4859]: I0120 09:25:10.049589 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:25:10 crc kubenswrapper[4859]: I0120 09:25:10.050521 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:25:10 crc kubenswrapper[4859]: I0120 09:25:10.050636 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4" gracePeriod=600 Jan 20 09:25:11 crc kubenswrapper[4859]: I0120 09:25:11.161384 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4" exitCode=0 Jan 20 09:25:11 crc kubenswrapper[4859]: I0120 09:25:11.161494 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4"} Jan 20 09:25:11 crc kubenswrapper[4859]: I0120 09:25:11.161776 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79"} Jan 20 09:25:11 crc kubenswrapper[4859]: I0120 09:25:11.161840 4859 scope.go:117] "RemoveContainer" containerID="d419f48ea5607cb989d27157610b3c03f776fa948b9e2a1f01adad08dd396845" Jan 20 09:27:10 crc kubenswrapper[4859]: I0120 09:27:10.048280 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:27:10 crc kubenswrapper[4859]: I0120 
09:27:10.048950 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:27:40 crc kubenswrapper[4859]: I0120 09:27:40.048596 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:27:40 crc kubenswrapper[4859]: I0120 09:27:40.050011 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107003 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhpfn"] Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107522 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-controller" containerID="cri-o://9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107614 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="nbdb" containerID="cri-o://584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba" gracePeriod=30 Jan 20 
09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107698 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107744 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-node" containerID="cri-o://a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107812 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-acl-logging" containerID="cri-o://6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.107672 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="northd" containerID="cri-o://f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.108179 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="sbdb" containerID="cri-o://8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.159994 4859 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" containerID="cri-o://70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289" gracePeriod=30 Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.872865 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/3.log" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.875415 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovn-acl-logging/0.log" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.875974 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovn-controller/0.log" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.876572 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929000 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929055 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929086 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929142 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log" (OuterVolumeSpecName: "node-log") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929172 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929204 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929215 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929232 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash" (OuterVolumeSpecName: "host-slash") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929252 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929282 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929338 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929340 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929376 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929404 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929481 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929521 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929551 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-92hv4\" (UniqueName: \"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929585 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929613 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930641 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930669 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930687 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 
09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930708 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929643 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket" (OuterVolumeSpecName: "log-socket") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929686 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929712 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929715 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.929745 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930083 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930237 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930754 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930730 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930883 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cfe04730-660d-4e59-8b5e-15e94d72990f\" (UID: \"cfe04730-660d-4e59-8b5e-15e94d72990f\") " Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.930982 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931129 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931123 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931186 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931286 4859 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931556 4859 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931580 4859 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931627 4859 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931687 4859 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931774 4859 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931870 4859 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.931889 4859 reconciler_common.go:293] 
"Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932154 4859 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932219 4859 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932240 4859 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932340 4859 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932359 4859 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.932403 4859 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.936843 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4" (OuterVolumeSpecName: "kube-api-access-92hv4") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "kube-api-access-92hv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.937198 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940523 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrpd2"] Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940772 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" containerName="registry" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940809 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" containerName="registry" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940821 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940829 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940842 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 
09:27:42.940849 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940860 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940867 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940880 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-acl-logging" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940887 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-acl-logging" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940899 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-node" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940907 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-node" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940918 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="nbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940924 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="nbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940931 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940938 
4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940948 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="northd" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940956 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="northd" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940971 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940978 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.940990 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kubecfg-setup" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.940997 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kubecfg-setup" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.941008 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="sbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941016 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="sbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941132 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7dcf75-d3ac-48f3-a9bb-66b37aa9be96" containerName="registry" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941143 4859 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941151 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941160 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941171 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-node" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941179 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941190 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="northd" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941201 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-acl-logging" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941212 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovn-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941220 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="nbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941228 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="sbdb" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.941331 
4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941339 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: E0120 09:27:42.941346 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941351 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941498 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.941512 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerName="ovnkube-controller" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.943181 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:42 crc kubenswrapper[4859]: I0120 09:27:42.959174 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cfe04730-660d-4e59-8b5e-15e94d72990f" (UID: "cfe04730-660d-4e59-8b5e-15e94d72990f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034215 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-node-log\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034282 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-etc-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034353 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-systemd-units\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034384 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-bin\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034494 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-systemd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034523 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-ovn\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034541 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6w9m\" (UniqueName: \"kubernetes.io/projected/66a06d4d-105a-4068-98bf-bccb64186d67-kube-api-access-t6w9m\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034561 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-var-lib-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034579 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-script-lib\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034596 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034613 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66a06d4d-105a-4068-98bf-bccb64186d67-ovn-node-metrics-cert\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034668 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-netns\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034706 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-kubelet\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034742 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-log-socket\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034773 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034842 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-env-overrides\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034865 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-slash\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034879 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034899 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-netd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.034919 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-config\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035106 4859 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfe04730-660d-4e59-8b5e-15e94d72990f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035149 4859 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035179 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92hv4\" (UniqueName: \"kubernetes.io/projected/cfe04730-660d-4e59-8b5e-15e94d72990f-kube-api-access-92hv4\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035208 4859 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfe04730-660d-4e59-8b5e-15e94d72990f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035230 4859 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.035248 4859 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfe04730-660d-4e59-8b5e-15e94d72990f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136644 4859 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-etc-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136735 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-systemd-units\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136765 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-bin\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136776 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-etc-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136857 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-systemd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136866 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-systemd-units\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136930 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-ovn\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136963 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-bin\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136963 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-systemd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.136890 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-ovn\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137331 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6w9m\" (UniqueName: \"kubernetes.io/projected/66a06d4d-105a-4068-98bf-bccb64186d67-kube-api-access-t6w9m\") pod \"ovnkube-node-mrpd2\" (UID: 
\"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137367 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-var-lib-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137394 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-script-lib\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137413 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137430 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66a06d4d-105a-4068-98bf-bccb64186d67-ovn-node-metrics-cert\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137465 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-netns\") pod \"ovnkube-node-mrpd2\" (UID: 
\"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137483 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-kubelet\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137512 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-log-socket\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137535 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137555 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-env-overrides\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137593 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-slash\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137611 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137647 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-netd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137672 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-config\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137739 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-node-log\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.137844 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-node-log\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 
09:27:43.138213 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-var-lib-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138249 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138319 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-log-socket\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138377 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138420 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-run-openvswitch\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138456 4859 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-cni-netd\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138347 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-run-netns\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138535 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-slash\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.138571 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66a06d4d-105a-4068-98bf-bccb64186d67-host-kubelet\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.139356 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-config\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.139546 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-ovnkube-script-lib\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.141128 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66a06d4d-105a-4068-98bf-bccb64186d67-env-overrides\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.143229 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66a06d4d-105a-4068-98bf-bccb64186d67-ovn-node-metrics-cert\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.154196 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6w9m\" (UniqueName: \"kubernetes.io/projected/66a06d4d-105a-4068-98bf-bccb64186d67-kube-api-access-t6w9m\") pod \"ovnkube-node-mrpd2\" (UID: \"66a06d4d-105a-4068-98bf-bccb64186d67\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.189006 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/2.log"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.189519 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/1.log"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.189570 4859 generic.go:334] "Generic (PLEG): container finished" podID="81947dc9-599a-4d35-a9c5-2684294a3afb" containerID="7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f" exitCode=2
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.189649 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerDied","Data":"7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.189694 4859 scope.go:117] "RemoveContainer" containerID="b95c6cf35c50a75fb1a2a1d40e697dab08aa04ae71fdaf770d8b2ebcb3ae6499"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.190226 4859 scope.go:117] "RemoveContainer" containerID="7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.190401 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xqq7l_openshift-multus(81947dc9-599a-4d35-a9c5-2684294a3afb)\"" pod="openshift-multus/multus-xqq7l" podUID="81947dc9-599a-4d35-a9c5-2684294a3afb"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.194031 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovnkube-controller/3.log"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.196669 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovn-acl-logging/0.log"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197371 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhpfn_cfe04730-660d-4e59-8b5e-15e94d72990f/ovn-controller/0.log"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197706 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197736 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197745 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197753 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197760 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197770 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52" exitCode=0
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197793 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee" exitCode=143
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197801 4859 generic.go:334] "Generic (PLEG): container finished" podID="cfe04730-660d-4e59-8b5e-15e94d72990f" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee" exitCode=143
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197817 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197903 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197921 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197938 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197953 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197966 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197980 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.197997 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198005 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198012 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198020 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198026 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198033 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198040 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198047 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198054 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198059 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198066 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198240 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198250 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198258 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198265 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198273 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198280 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198287 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198295 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198302 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198309 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198319 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198331 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198338 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198345 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198352 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198359 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198365 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198372 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198378 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198385 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198393 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198403 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhpfn" event={"ID":"cfe04730-660d-4e59-8b5e-15e94d72990f","Type":"ContainerDied","Data":"4203edfc5c6de74ffae807f09750455273a2e4f2f68d3bb1f4e779759a0a9e58"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198416 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198425 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198433 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198440 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198446 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198453 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198459 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198467 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198474 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.198482 4859 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"}
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.228830 4859 scope.go:117] "RemoveContainer" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.259293 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhpfn"]
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.262577 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.263009 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.266447 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhpfn"]
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.313509 4859 scope.go:117] "RemoveContainer" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.338926 4859 scope.go:117] "RemoveContainer" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.355899 4859 scope.go:117] "RemoveContainer" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.374656 4859 scope.go:117] "RemoveContainer" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.387508 4859 scope.go:117] "RemoveContainer" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.400723 4859 scope.go:117] "RemoveContainer" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.473273 4859 scope.go:117] "RemoveContainer" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.491354 4859 scope.go:117] "RemoveContainer" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.509109 4859 scope.go:117] "RemoveContainer" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.509590 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": container with ID starting with 70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289 not found: ID does not exist" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.509640 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"} err="failed to get container status \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": rpc error: code = NotFound desc = could not find container \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": container with ID starting with 70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.509675 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.509937 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": container with ID starting with 0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964 not found: ID does not exist" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.509961 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} err="failed to get container status \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": rpc error: code = NotFound desc = could not find container \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": container with ID starting with 0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.509979 4859 scope.go:117] "RemoveContainer" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.510415 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": container with ID starting with 8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e not found: ID does not exist" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.510436 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"} err="failed to get container status \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": rpc error: code = NotFound desc = could not find container \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": container with ID starting with 8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.510448 4859 scope.go:117] "RemoveContainer" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.510748 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": container with ID starting with 584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba not found: ID does not exist" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.510932 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"} err="failed to get container status \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": rpc error: code = NotFound desc = could not find container \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": container with ID starting with 584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.511063 4859 scope.go:117] "RemoveContainer" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.511423 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": container with ID starting with f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b not found: ID does not exist" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.511447 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"} err="failed to get container status \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": rpc error: code = NotFound desc = could not find container \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": container with ID starting with f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.511461 4859 scope.go:117] "RemoveContainer" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.511966 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": container with ID starting with b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7 not found: ID does not exist" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.512017 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"} err="failed to get container status \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": rpc error: code = NotFound desc = could not find container \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": container with ID starting with b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.512051 4859 scope.go:117] "RemoveContainer" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.512349 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": container with ID starting with a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52 not found: ID does not exist" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.512484 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"} err="failed to get container status \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": rpc error: code = NotFound desc = could not find container \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": container with ID starting with a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.512594 4859 scope.go:117] "RemoveContainer" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.512975 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": container with ID starting with 6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee not found: ID does not exist" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.513118 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"} err="failed to get container status \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": rpc error: code = NotFound desc = could not find container \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": container with ID starting with 6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.513233 4859 scope.go:117] "RemoveContainer" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.513560 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": container with ID starting with 9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee not found: ID does not exist" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.513586 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"} err="failed to get container status \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": rpc error: code = NotFound desc = could not find container \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": container with ID starting with 9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.513601 4859 scope.go:117] "RemoveContainer" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"
Jan 20 09:27:43 crc kubenswrapper[4859]: E0120 09:27:43.513909 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": container with ID starting with ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22 not found: ID does not exist" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514050 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"} err="failed to get container status \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": rpc error: code = NotFound desc = could not find container \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": container with ID starting with ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514170 4859 scope.go:117] "RemoveContainer" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514507 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"} err="failed to get container status \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": rpc error: code = NotFound desc = could not find container \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": container with ID starting with 70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514532 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514748 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} err="failed to get container status \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": rpc error: code = NotFound desc = could not find container \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": container with ID starting with 0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.514772 4859 scope.go:117] "RemoveContainer" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.515205 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"} err="failed to get container status \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": rpc error: code = NotFound desc = could not find container \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": container with ID starting with 8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.515477 4859 scope.go:117] "RemoveContainer" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.515963 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"} err="failed to get container status \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": rpc error: code = NotFound desc = could not find container \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": container with ID starting with 584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.515986 4859 scope.go:117] "RemoveContainer" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.516357 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"} err="failed to get container status \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": rpc error: code = NotFound desc = could not find container \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": container with ID starting with f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.516376 4859 scope.go:117] "RemoveContainer" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.517934 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"} err="failed to get container status \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": rpc error: code = NotFound desc = could not find container \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": container with ID starting with b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.517961 4859 scope.go:117] "RemoveContainer" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518159 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"} err="failed to get container status \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": rpc error: code = NotFound desc = could not find container \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": container with ID starting with a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52 not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518184 4859 scope.go:117] "RemoveContainer" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518464 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"} err="failed to get container status \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": rpc error: code = NotFound desc = could not find container \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": container with ID starting with 6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518489 4859 scope.go:117] "RemoveContainer" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518671 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"} err="failed to get container status \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": rpc error: code = NotFound desc = could not find container \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": container with ID starting with 9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee not found: ID does not exist"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518692 4859 scope.go:117] "RemoveContainer" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"
Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518887 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"} err="failed to get container status \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": rpc error: code = NotFound desc = could not find container \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": container with ID starting with ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22 not found: ID does not
exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.518909 4859 scope.go:117] "RemoveContainer" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.519713 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"} err="failed to get container status \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": rpc error: code = NotFound desc = could not find container \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": container with ID starting with 70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.519738 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.519976 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} err="failed to get container status \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": rpc error: code = NotFound desc = could not find container \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": container with ID starting with 0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520000 4859 scope.go:117] "RemoveContainer" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520263 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"} err="failed to get container status 
\"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": rpc error: code = NotFound desc = could not find container \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": container with ID starting with 8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520284 4859 scope.go:117] "RemoveContainer" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520465 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"} err="failed to get container status \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": rpc error: code = NotFound desc = could not find container \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": container with ID starting with 584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520490 4859 scope.go:117] "RemoveContainer" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520709 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"} err="failed to get container status \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": rpc error: code = NotFound desc = could not find container \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": container with ID starting with f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.520730 4859 scope.go:117] "RemoveContainer" 
containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521018 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"} err="failed to get container status \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": rpc error: code = NotFound desc = could not find container \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": container with ID starting with b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521041 4859 scope.go:117] "RemoveContainer" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521315 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"} err="failed to get container status \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": rpc error: code = NotFound desc = could not find container \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": container with ID starting with a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521338 4859 scope.go:117] "RemoveContainer" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521591 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"} err="failed to get container status \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": rpc error: code = NotFound desc = could 
not find container \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": container with ID starting with 6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521613 4859 scope.go:117] "RemoveContainer" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521825 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"} err="failed to get container status \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": rpc error: code = NotFound desc = could not find container \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": container with ID starting with 9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.521847 4859 scope.go:117] "RemoveContainer" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522078 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"} err="failed to get container status \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": rpc error: code = NotFound desc = could not find container \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": container with ID starting with ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522094 4859 scope.go:117] "RemoveContainer" containerID="70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 
09:27:43.522296 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289"} err="failed to get container status \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": rpc error: code = NotFound desc = could not find container \"70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289\": container with ID starting with 70a3a51df990d9a8727ca202807610d0930ab1c3fbd6f21edc402ee53e1e1289 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522313 4859 scope.go:117] "RemoveContainer" containerID="0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522624 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964"} err="failed to get container status \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": rpc error: code = NotFound desc = could not find container \"0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964\": container with ID starting with 0753ca99ac34ddfc6b93467a694e2188cafaffcc12b2a5dcf4196e8d345bf964 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522646 4859 scope.go:117] "RemoveContainer" containerID="8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522884 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e"} err="failed to get container status \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": rpc error: code = NotFound desc = could not find container \"8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e\": container with ID starting with 
8af5079f6586452296d1c63b4923c9f784084e57df9cc5b2012a938c0e4c998e not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.522901 4859 scope.go:117] "RemoveContainer" containerID="584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523077 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba"} err="failed to get container status \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": rpc error: code = NotFound desc = could not find container \"584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba\": container with ID starting with 584350b97f11eabc8d74384742f68c9f44a786ec3d1494596a6d0e79dde05bba not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523098 4859 scope.go:117] "RemoveContainer" containerID="f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523334 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b"} err="failed to get container status \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": rpc error: code = NotFound desc = could not find container \"f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b\": container with ID starting with f251653fcc578de83e354e8d1493d2890e15db81db6e62bc355c974ad064463b not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523355 4859 scope.go:117] "RemoveContainer" containerID="b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523621 4859 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7"} err="failed to get container status \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": rpc error: code = NotFound desc = could not find container \"b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7\": container with ID starting with b42d2f2b537353a84ce943df5d538b64392cf1e3addccf51a9c0f773c3796fa7 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523642 4859 scope.go:117] "RemoveContainer" containerID="a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523936 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52"} err="failed to get container status \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": rpc error: code = NotFound desc = could not find container \"a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52\": container with ID starting with a14103d5de291eb05fc7b21cef117e5e276aec4061b9cd6d811fff0ab608be52 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.523970 4859 scope.go:117] "RemoveContainer" containerID="6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.524168 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee"} err="failed to get container status \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": rpc error: code = NotFound desc = could not find container \"6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee\": container with ID starting with 6d8403d446673f4288697a2b4831de3bf422fce1ccc1ec9ada85ac1432ffbfee not found: ID does not 
exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.524187 4859 scope.go:117] "RemoveContainer" containerID="9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.524372 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee"} err="failed to get container status \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": rpc error: code = NotFound desc = could not find container \"9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee\": container with ID starting with 9f304cff00abc8192168c105f2860a40c6ad6484d948b3c3dced22b108937eee not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.524387 4859 scope.go:117] "RemoveContainer" containerID="ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.524557 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22"} err="failed to get container status \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": rpc error: code = NotFound desc = could not find container \"ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22\": container with ID starting with ac69fd57e09f282974240675e9637fa3bdcd1e08b8f3d73366153750530f3f22 not found: ID does not exist" Jan 20 09:27:43 crc kubenswrapper[4859]: I0120 09:27:43.583299 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe04730-660d-4e59-8b5e-15e94d72990f" path="/var/lib/kubelet/pods/cfe04730-660d-4e59-8b5e-15e94d72990f/volumes" Jan 20 09:27:44 crc kubenswrapper[4859]: I0120 09:27:44.207913 4859 generic.go:334] "Generic (PLEG): container finished" podID="66a06d4d-105a-4068-98bf-bccb64186d67" 
containerID="f3e5a3cf3e0bd72d3afb12142aa6be5c8e21c54eb4c65440629b50b125044a36" exitCode=0 Jan 20 09:27:44 crc kubenswrapper[4859]: I0120 09:27:44.208262 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerDied","Data":"f3e5a3cf3e0bd72d3afb12142aa6be5c8e21c54eb4c65440629b50b125044a36"} Jan 20 09:27:44 crc kubenswrapper[4859]: I0120 09:27:44.208311 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"b2520d6a8132ab1c5158e32e2c40abdb52d71fe8e1644068c48c2d6f258a7390"} Jan 20 09:27:44 crc kubenswrapper[4859]: I0120 09:27:44.216088 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/2.log" Jan 20 09:27:45 crc kubenswrapper[4859]: I0120 09:27:45.232428 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"a7ba80e933f7dcb750c2f37f0a1cc2a8da6d728738feba0826e35ccf0422b431"} Jan 20 09:27:45 crc kubenswrapper[4859]: I0120 09:27:45.232888 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"890dae93870f0e4e6f6c7a8b47fed88e2679c92c4c18e684b9a8452947c62cab"} Jan 20 09:27:45 crc kubenswrapper[4859]: I0120 09:27:45.232910 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"d197ec6332f4032da31307888f019e6cd48f8e2b8d12a531475d038258428702"} Jan 20 09:27:45 crc kubenswrapper[4859]: I0120 09:27:45.232926 4859 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"cf1101c005d782d3002a82cf5690fae1b8ab5873fa0673fa11222f4238d37007"} Jan 20 09:27:45 crc kubenswrapper[4859]: I0120 09:27:45.232942 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"ff6bf8b080fcdb28a8f38c43ce9ffd621bf4b1276c83f2fa85e400ed9d19d374"} Jan 20 09:27:46 crc kubenswrapper[4859]: I0120 09:27:46.245940 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"5a6befe1839a15101c78efabd2fd6564d0a7cc259741f4962bf5855a706d24ff"} Jan 20 09:27:48 crc kubenswrapper[4859]: I0120 09:27:48.268638 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"1fa21dccc7d991fd175151b375a2765fdac8df392d366db4072331b2326bc9e0"} Jan 20 09:27:50 crc kubenswrapper[4859]: I0120 09:27:50.285153 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" event={"ID":"66a06d4d-105a-4068-98bf-bccb64186d67","Type":"ContainerStarted","Data":"587dade8cdedbcc56c5695e625a05fc268a9413a28773aaf48a788e5ca966422"} Jan 20 09:27:50 crc kubenswrapper[4859]: I0120 09:27:50.285674 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:50 crc kubenswrapper[4859]: I0120 09:27:50.327301 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:50 crc kubenswrapper[4859]: I0120 09:27:50.328695 4859 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" podStartSLOduration=8.328676352 podStartE2EDuration="8.328676352s" podCreationTimestamp="2026-01-20 09:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:27:50.325613547 +0000 UTC m=+545.081629773" watchObservedRunningTime="2026-01-20 09:27:50.328676352 +0000 UTC m=+545.084692538" Jan 20 09:27:51 crc kubenswrapper[4859]: I0120 09:27:51.291318 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:51 crc kubenswrapper[4859]: I0120 09:27:51.291388 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:51 crc kubenswrapper[4859]: I0120 09:27:51.363661 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:27:54 crc kubenswrapper[4859]: I0120 09:27:54.574233 4859 scope.go:117] "RemoveContainer" containerID="7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f" Jan 20 09:27:54 crc kubenswrapper[4859]: E0120 09:27:54.575142 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xqq7l_openshift-multus(81947dc9-599a-4d35-a9c5-2684294a3afb)\"" pod="openshift-multus/multus-xqq7l" podUID="81947dc9-599a-4d35-a9c5-2684294a3afb" Jan 20 09:28:05 crc kubenswrapper[4859]: I0120 09:28:05.579777 4859 scope.go:117] "RemoveContainer" containerID="7e3aeeb7da6e924263fce786c7d33b083af999b01d213be14aedd92bcff9c96f" Jan 20 09:28:06 crc kubenswrapper[4859]: I0120 09:28:06.402487 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xqq7l_81947dc9-599a-4d35-a9c5-2684294a3afb/kube-multus/2.log" Jan 20 09:28:06 crc 
kubenswrapper[4859]: I0120 09:28:06.402991 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xqq7l" event={"ID":"81947dc9-599a-4d35-a9c5-2684294a3afb","Type":"ContainerStarted","Data":"60f5a223eee02670853607e7e5d867e92c7d1115e571bcdd91d2462312fc9606"} Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.049043 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.049404 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.049476 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.050395 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.050516 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" 
containerID="cri-o://c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79" gracePeriod=600 Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.436285 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79" exitCode=0 Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.436370 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79"} Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.436425 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f"} Jan 20 09:28:10 crc kubenswrapper[4859]: I0120 09:28:10.436461 4859 scope.go:117] "RemoveContainer" containerID="1f87b07074dc2de3606419bc083b72c6305e6f872bdad3c5497ae793d5e33db4" Jan 20 09:28:13 crc kubenswrapper[4859]: I0120 09:28:13.294249 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrpd2" Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.509328 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"] Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.510277 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmf9d" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="registry-server" containerID="cri-o://de7fbc0c70fe62804b955b2b6a04811f08861844adcc36f1350ab18ac78315bf" gracePeriod=30 Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 
09:28:52.710410 4859 generic.go:334] "Generic (PLEG): container finished" podID="9a598f90-74de-4b63-88c4-74fea20109ca" containerID="de7fbc0c70fe62804b955b2b6a04811f08861844adcc36f1350ab18ac78315bf" exitCode=0
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.710591 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerDied","Data":"de7fbc0c70fe62804b955b2b6a04811f08861844adcc36f1350ab18ac78315bf"}
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.842237 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmf9d"
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.941989 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdq7k\" (UniqueName: \"kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k\") pod \"9a598f90-74de-4b63-88c4-74fea20109ca\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") "
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.942073 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content\") pod \"9a598f90-74de-4b63-88c4-74fea20109ca\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") "
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.942156 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities\") pod \"9a598f90-74de-4b63-88c4-74fea20109ca\" (UID: \"9a598f90-74de-4b63-88c4-74fea20109ca\") "
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.943056 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities" (OuterVolumeSpecName: "utilities") pod "9a598f90-74de-4b63-88c4-74fea20109ca" (UID: "9a598f90-74de-4b63-88c4-74fea20109ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.951949 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k" (OuterVolumeSpecName: "kube-api-access-qdq7k") pod "9a598f90-74de-4b63-88c4-74fea20109ca" (UID: "9a598f90-74de-4b63-88c4-74fea20109ca"). InnerVolumeSpecName "kube-api-access-qdq7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:28:52 crc kubenswrapper[4859]: I0120 09:28:52.967321 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a598f90-74de-4b63-88c4-74fea20109ca" (UID: "9a598f90-74de-4b63-88c4-74fea20109ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.043350 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.043391 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdq7k\" (UniqueName: \"kubernetes.io/projected/9a598f90-74de-4b63-88c4-74fea20109ca-kube-api-access-qdq7k\") on node \"crc\" DevicePath \"\""
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.043403 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a598f90-74de-4b63-88c4-74fea20109ca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.717200 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmf9d" event={"ID":"9a598f90-74de-4b63-88c4-74fea20109ca","Type":"ContainerDied","Data":"e16ad2cdd742abb69da668044b97e06c423b1ec8fe23851c686d503548a00efa"}
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.717261 4859 scope.go:117] "RemoveContainer" containerID="de7fbc0c70fe62804b955b2b6a04811f08861844adcc36f1350ab18ac78315bf"
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.717377 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmf9d"
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.737539 4859 scope.go:117] "RemoveContainer" containerID="5474015a3deb8a964359f7a8b24e0c21560f1124b01c7a5137c21023ac10a136"
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.745379 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"]
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.748886 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmf9d"]
Jan 20 09:28:53 crc kubenswrapper[4859]: I0120 09:28:53.755434 4859 scope.go:117] "RemoveContainer" containerID="e046dbdf3fcdd7bc0e27240c51384d930270d1316dadc4c576bb94a77e830f04"
Jan 20 09:28:55 crc kubenswrapper[4859]: I0120 09:28:55.583940 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" path="/var/lib/kubelet/pods/9a598f90-74de-4b63-88c4-74fea20109ca/volumes"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.281495 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"]
Jan 20 09:28:56 crc kubenswrapper[4859]: E0120 09:28:56.281748 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="registry-server"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.281765 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="registry-server"
Jan 20 09:28:56 crc kubenswrapper[4859]: E0120 09:28:56.281799 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="extract-content"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.281806 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="extract-content"
Jan 20 09:28:56 crc kubenswrapper[4859]: E0120 09:28:56.281819 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="extract-utilities"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.281825 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="extract-utilities"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.281926 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a598f90-74de-4b63-88c4-74fea20109ca" containerName="registry-server"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.282640 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.284623 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.299862 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"]
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.387109 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft2p\" (UniqueName: \"kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.387166 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.387206 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.489083 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.489176 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.489260 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ft2p\" (UniqueName: \"kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.490004 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.490066 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.514108 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ft2p\" (UniqueName: \"kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.597885 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:28:56 crc kubenswrapper[4859]: I0120 09:28:56.851816 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"]
Jan 20 09:28:57 crc kubenswrapper[4859]: I0120 09:28:57.753122 4859 generic.go:334] "Generic (PLEG): container finished" podID="e76aa968-4292-49a7-9839-f8b1771798dc" containerID="73ef3f0d3e3031cb8b709268104a00de39af82b0acbe52bcd66281651da2cf1d" exitCode=0
Jan 20 09:28:57 crc kubenswrapper[4859]: I0120 09:28:57.753285 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd" event={"ID":"e76aa968-4292-49a7-9839-f8b1771798dc","Type":"ContainerDied","Data":"73ef3f0d3e3031cb8b709268104a00de39af82b0acbe52bcd66281651da2cf1d"}
Jan 20 09:28:57 crc kubenswrapper[4859]: I0120 09:28:57.753487 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd" event={"ID":"e76aa968-4292-49a7-9839-f8b1771798dc","Type":"ContainerStarted","Data":"63f4e9848e02dd5e5c801f903c70c6d4b21b2c9fbea03e6197fae48cb6df4927"}
Jan 20 09:28:57 crc kubenswrapper[4859]: I0120 09:28:57.755847 4859 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 09:28:59 crc kubenswrapper[4859]: I0120 09:28:59.768022 4859 generic.go:334] "Generic (PLEG): container finished" podID="e76aa968-4292-49a7-9839-f8b1771798dc" containerID="df7e419218b6600e51f762d0569fb1efced830f5fe35792ff567a727669bd54a" exitCode=0
Jan 20 09:28:59 crc kubenswrapper[4859]: I0120 09:28:59.768097 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd" event={"ID":"e76aa968-4292-49a7-9839-f8b1771798dc","Type":"ContainerDied","Data":"df7e419218b6600e51f762d0569fb1efced830f5fe35792ff567a727669bd54a"}
Jan 20 09:29:00 crc kubenswrapper[4859]: I0120 09:29:00.779036 4859 generic.go:334] "Generic (PLEG): container finished" podID="e76aa968-4292-49a7-9839-f8b1771798dc" containerID="0e2d4555622a00c077e99251bbfadb43fededb5fa7856e16549e0b4f2b80f515" exitCode=0
Jan 20 09:29:00 crc kubenswrapper[4859]: I0120 09:29:00.779255 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd" event={"ID":"e76aa968-4292-49a7-9839-f8b1771798dc","Type":"ContainerDied","Data":"0e2d4555622a00c077e99251bbfadb43fededb5fa7856e16549e0b4f2b80f515"}
Jan 20 09:29:01 crc kubenswrapper[4859]: I0120 09:29:01.988906 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.081317 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle\") pod \"e76aa968-4292-49a7-9839-f8b1771798dc\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") "
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.081484 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util\") pod \"e76aa968-4292-49a7-9839-f8b1771798dc\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") "
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.081564 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ft2p\" (UniqueName: \"kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p\") pod \"e76aa968-4292-49a7-9839-f8b1771798dc\" (UID: \"e76aa968-4292-49a7-9839-f8b1771798dc\") "
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.085367 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle" (OuterVolumeSpecName: "bundle") pod "e76aa968-4292-49a7-9839-f8b1771798dc" (UID: "e76aa968-4292-49a7-9839-f8b1771798dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.089068 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p" (OuterVolumeSpecName: "kube-api-access-4ft2p") pod "e76aa968-4292-49a7-9839-f8b1771798dc" (UID: "e76aa968-4292-49a7-9839-f8b1771798dc"). InnerVolumeSpecName "kube-api-access-4ft2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.097262 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util" (OuterVolumeSpecName: "util") pod "e76aa968-4292-49a7-9839-f8b1771798dc" (UID: "e76aa968-4292-49a7-9839-f8b1771798dc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.182730 4859 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.182824 4859 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e76aa968-4292-49a7-9839-f8b1771798dc-util\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.182840 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ft2p\" (UniqueName: \"kubernetes.io/projected/e76aa968-4292-49a7-9839-f8b1771798dc-kube-api-access-4ft2p\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.794729 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd" event={"ID":"e76aa968-4292-49a7-9839-f8b1771798dc","Type":"ContainerDied","Data":"63f4e9848e02dd5e5c801f903c70c6d4b21b2c9fbea03e6197fae48cb6df4927"}
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.794836 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f4e9848e02dd5e5c801f903c70c6d4b21b2c9fbea03e6197fae48cb6df4927"
Jan 20 09:29:02 crc kubenswrapper[4859]: I0120 09:29:02.794878 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.255158 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"]
Jan 20 09:29:03 crc kubenswrapper[4859]: E0120 09:29:03.255384 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="pull"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.255397 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="pull"
Jan 20 09:29:03 crc kubenswrapper[4859]: E0120 09:29:03.255411 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="util"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.255416 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="util"
Jan 20 09:29:03 crc kubenswrapper[4859]: E0120 09:29:03.255428 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="extract"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.255434 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="extract"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.255532 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76aa968-4292-49a7-9839-f8b1771798dc" containerName="extract"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.256418 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.259139 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.271391 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"]
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.398691 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.398847 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.398909 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbfs\" (UniqueName: \"kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.499816 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.499936 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.500012 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbbfs\" (UniqueName: \"kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.500497 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.500512 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.516577 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbbfs\" (UniqueName: \"kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.571976 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.769511 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"]
Jan 20 09:29:03 crc kubenswrapper[4859]: W0120 09:29:03.775439 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce40b55_7671_4f2f_8add_a1b79d87acdb.slice/crio-242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4 WatchSource:0}: Error finding container 242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4: Status 404 returned error can't find the container with id 242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4
Jan 20 09:29:03 crc kubenswrapper[4859]: I0120 09:29:03.805176 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" event={"ID":"7ce40b55-7671-4f2f-8add-a1b79d87acdb","Type":"ContainerStarted","Data":"242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4"}
Jan 20 09:29:04 crc kubenswrapper[4859]: I0120 09:29:04.814167 4859 generic.go:334] "Generic (PLEG): container finished" podID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerID="43046b06bb9b2e6900e2aeea76b33eedf6691c906c1ba31ce4581db9439a5e6a" exitCode=0
Jan 20 09:29:04 crc kubenswrapper[4859]: I0120 09:29:04.814260 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" event={"ID":"7ce40b55-7671-4f2f-8add-a1b79d87acdb","Type":"ContainerDied","Data":"43046b06bb9b2e6900e2aeea76b33eedf6691c906c1ba31ce4581db9439a5e6a"}
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.516918 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"]
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.518676 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.528344 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"]
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.657799 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.657874 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.657952 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6lx\" (UniqueName: \"kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.759482 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.759540 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.759560 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6lx\" (UniqueName: \"kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.760046 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.760101 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.797193 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6lx\" (UniqueName: \"kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.834631 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.852392 4859 generic.go:334] "Generic (PLEG): container finished" podID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerID="0faa7e485592e0cdd6c7aa54eff7008d76625a5ebf4e1ff2a978ac9521c5b9d9" exitCode=0
Jan 20 09:29:07 crc kubenswrapper[4859]: I0120 09:29:07.852455 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" event={"ID":"7ce40b55-7671-4f2f-8add-a1b79d87acdb","Type":"ContainerDied","Data":"0faa7e485592e0cdd6c7aa54eff7008d76625a5ebf4e1ff2a978ac9521c5b9d9"}
Jan 20 09:29:08 crc kubenswrapper[4859]: W0120 09:29:08.370544 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda104415_69df_456f_9d81_1e514fc3249f.slice/crio-9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6 WatchSource:0}: Error finding container 9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6: Status 404 returned error can't find the container with id 9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.372750 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp"]
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.858168 4859 generic.go:334] "Generic (PLEG): container finished" podID="da104415-69df-456f-9d81-1e514fc3249f" containerID="08ded64c51d14f4ae1d01de0a7ef4c7fa142928a6441dc19679014dfc2a935ff" exitCode=0
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.858235 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" event={"ID":"da104415-69df-456f-9d81-1e514fc3249f","Type":"ContainerDied","Data":"08ded64c51d14f4ae1d01de0a7ef4c7fa142928a6441dc19679014dfc2a935ff"}
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.858440 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" event={"ID":"da104415-69df-456f-9d81-1e514fc3249f","Type":"ContainerStarted","Data":"9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6"}
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.862546 4859 generic.go:334] "Generic (PLEG): container finished" podID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerID="aa9a3e9a3b2487c4200611d458be7a8bb86657b43a4964f7f173d5a39033ae05" exitCode=0
Jan 20 09:29:08 crc kubenswrapper[4859]: I0120 09:29:08.862600 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" event={"ID":"7ce40b55-7671-4f2f-8add-a1b79d87acdb","Type":"ContainerDied","Data":"aa9a3e9a3b2487c4200611d458be7a8bb86657b43a4964f7f173d5a39033ae05"}
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.218229 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb"
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.296753 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util\") pod \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") "
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.296876 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle\") pod \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") "
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.297057 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbbfs\" (UniqueName: \"kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs\") pod \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\" (UID: \"7ce40b55-7671-4f2f-8add-a1b79d87acdb\") "
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.297719 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle" (OuterVolumeSpecName: "bundle") pod "7ce40b55-7671-4f2f-8add-a1b79d87acdb" (UID: "7ce40b55-7671-4f2f-8add-a1b79d87acdb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.302973 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs" (OuterVolumeSpecName: "kube-api-access-pbbfs") pod "7ce40b55-7671-4f2f-8add-a1b79d87acdb" (UID: "7ce40b55-7671-4f2f-8add-a1b79d87acdb"). InnerVolumeSpecName "kube-api-access-pbbfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.311065 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util" (OuterVolumeSpecName: "util") pod "7ce40b55-7671-4f2f-8add-a1b79d87acdb" (UID: "7ce40b55-7671-4f2f-8add-a1b79d87acdb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.399268 4859 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-util\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.399325 4859 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ce40b55-7671-4f2f-8add-a1b79d87acdb-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.399339 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbbfs\" (UniqueName: \"kubernetes.io/projected/7ce40b55-7671-4f2f-8add-a1b79d87acdb-kube-api-access-pbbfs\") on node \"crc\" DevicePath \"\""
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.878041 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" event={"ID":"7ce40b55-7671-4f2f-8add-a1b79d87acdb","Type":"ContainerDied","Data":"242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4"}
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.878086 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242287f92f0ce960c0971c888ec50a9cd7713e271f9a508dea0829d22b00fda4"
Jan 20 09:29:10 crc kubenswrapper[4859]: I0120 09:29:10.878160 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.789126 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d"] Jan 20 09:29:12 crc kubenswrapper[4859]: E0120 09:29:12.789815 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="extract" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.789832 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="extract" Jan 20 09:29:12 crc kubenswrapper[4859]: E0120 09:29:12.789866 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="pull" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.789877 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="pull" Jan 20 09:29:12 crc kubenswrapper[4859]: E0120 09:29:12.789890 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="util" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.789899 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="util" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.790023 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce40b55-7671-4f2f-8add-a1b79d87acdb" containerName="extract" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.790513 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.792395 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.792650 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.793263 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-lvzgj" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.803765 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d"] Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.938723 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdwjp\" (UniqueName: \"kubernetes.io/projected/5fd94288-36b5-45a6-8b54-f91cf71c4db8-kube-api-access-vdwjp\") pod \"obo-prometheus-operator-68bc856cb9-22h5d\" (UID: \"5fd94288-36b5-45a6-8b54-f91cf71c4db8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.960349 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8"] Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.961122 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.962812 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.963087 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vfp6w" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.968522 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf"] Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.969425 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:12 crc kubenswrapper[4859]: I0120 09:29:12.977092 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.019971 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.039801 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.039851 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: \"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.039895 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.039934 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdwjp\" (UniqueName: \"kubernetes.io/projected/5fd94288-36b5-45a6-8b54-f91cf71c4db8-kube-api-access-vdwjp\") pod \"obo-prometheus-operator-68bc856cb9-22h5d\" (UID: \"5fd94288-36b5-45a6-8b54-f91cf71c4db8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.039955 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: \"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.056700 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdwjp\" (UniqueName: \"kubernetes.io/projected/5fd94288-36b5-45a6-8b54-f91cf71c4db8-kube-api-access-vdwjp\") pod 
\"obo-prometheus-operator-68bc856cb9-22h5d\" (UID: \"5fd94288-36b5-45a6-8b54-f91cf71c4db8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.106423 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ft4zn"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.107258 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.112273 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.112388 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dljqc" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.112286 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.119630 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ft4zn"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.141285 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.141334 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: \"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.141364 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.141403 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: 
\"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.145120 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: \"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.145247 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.145283 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b98dcaa-fc77-41c5-afeb-457e039e9818-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf\" (UID: \"8b98dcaa-fc77-41c5-afeb-457e039e9818\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.146139 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cbde904-658d-4fc7-9705-692dc47c50c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8\" (UID: \"9cbde904-658d-4fc7-9705-692dc47c50c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.226058 4859 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6bhr5"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.227080 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.228951 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-trvvg" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.242298 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7698897d-ef4b-4fc1-a25a-634a2abae6c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.242375 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsz9\" (UniqueName: \"kubernetes.io/projected/7698897d-ef4b-4fc1-a25a-634a2abae6c7-kube-api-access-xfsz9\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.250000 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6bhr5"] Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.280136 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.304343 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.344236 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfsz9\" (UniqueName: \"kubernetes.io/projected/7698897d-ef4b-4fc1-a25a-634a2abae6c7-kube-api-access-xfsz9\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.344315 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3a24c-3b9b-4a53-9b19-3ff693862438-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.344357 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44q5c\" (UniqueName: \"kubernetes.io/projected/a4b3a24c-3b9b-4a53-9b19-3ff693862438-kube-api-access-44q5c\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.344412 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7698897d-ef4b-4fc1-a25a-634a2abae6c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.348752 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7698897d-ef4b-4fc1-a25a-634a2abae6c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.362203 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfsz9\" (UniqueName: \"kubernetes.io/projected/7698897d-ef4b-4fc1-a25a-634a2abae6c7-kube-api-access-xfsz9\") pod \"observability-operator-59bdc8b94-ft4zn\" (UID: \"7698897d-ef4b-4fc1-a25a-634a2abae6c7\") " pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.429981 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.445732 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3a24c-3b9b-4a53-9b19-3ff693862438-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.445811 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44q5c\" (UniqueName: \"kubernetes.io/projected/a4b3a24c-3b9b-4a53-9b19-3ff693862438-kube-api-access-44q5c\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.446905 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a4b3a24c-3b9b-4a53-9b19-3ff693862438-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.464632 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44q5c\" (UniqueName: \"kubernetes.io/projected/a4b3a24c-3b9b-4a53-9b19-3ff693862438-kube-api-access-44q5c\") pod \"perses-operator-5bf474d74f-6bhr5\" (UID: \"a4b3a24c-3b9b-4a53-9b19-3ff693862438\") " pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:13 crc kubenswrapper[4859]: I0120 09:29:13.544628 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.790743 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6bhr5"] Jan 20 09:29:16 crc kubenswrapper[4859]: W0120 09:29:16.809662 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3a24c_3b9b_4a53_9b19_3ff693862438.slice/crio-deac3931b6cec35e740b79c1a8f78f2f9514bd0c7323ef14e15856b58ec79a38 WatchSource:0}: Error finding container deac3931b6cec35e740b79c1a8f78f2f9514bd0c7323ef14e15856b58ec79a38: Status 404 returned error can't find the container with id deac3931b6cec35e740b79c1a8f78f2f9514bd0c7323ef14e15856b58ec79a38 Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.856637 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d"] Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.861539 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ft4zn"] Jan 20 09:29:16 crc kubenswrapper[4859]: W0120 09:29:16.861908 
4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cbde904_658d_4fc7_9705_692dc47c50c4.slice/crio-857d9bf3ce5d43850d55549053af5dfc89e694c1a0e18b565aac3d36d85e9fcf WatchSource:0}: Error finding container 857d9bf3ce5d43850d55549053af5dfc89e694c1a0e18b565aac3d36d85e9fcf: Status 404 returned error can't find the container with id 857d9bf3ce5d43850d55549053af5dfc89e694c1a0e18b565aac3d36d85e9fcf Jan 20 09:29:16 crc kubenswrapper[4859]: W0120 09:29:16.863322 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7698897d_ef4b_4fc1_a25a_634a2abae6c7.slice/crio-00b8853501041f058b00f3841910c67721df8e1c6825861ee0d2e9837a8ff1b2 WatchSource:0}: Error finding container 00b8853501041f058b00f3841910c67721df8e1c6825861ee0d2e9837a8ff1b2: Status 404 returned error can't find the container with id 00b8853501041f058b00f3841910c67721df8e1c6825861ee0d2e9837a8ff1b2 Jan 20 09:29:16 crc kubenswrapper[4859]: W0120 09:29:16.868508 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd94288_36b5_45a6_8b54_f91cf71c4db8.slice/crio-c1ae53277819b25e58de3f7b23d7abc9cc20ad3856a6727a66d150fea5bad6b3 WatchSource:0}: Error finding container c1ae53277819b25e58de3f7b23d7abc9cc20ad3856a6727a66d150fea5bad6b3: Status 404 returned error can't find the container with id c1ae53277819b25e58de3f7b23d7abc9cc20ad3856a6727a66d150fea5bad6b3 Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.868654 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8"] Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.952832 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" 
event={"ID":"a4b3a24c-3b9b-4a53-9b19-3ff693862438","Type":"ContainerStarted","Data":"deac3931b6cec35e740b79c1a8f78f2f9514bd0c7323ef14e15856b58ec79a38"} Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.953716 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" event={"ID":"7698897d-ef4b-4fc1-a25a-634a2abae6c7","Type":"ContainerStarted","Data":"00b8853501041f058b00f3841910c67721df8e1c6825861ee0d2e9837a8ff1b2"} Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.954616 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" event={"ID":"5fd94288-36b5-45a6-8b54-f91cf71c4db8","Type":"ContainerStarted","Data":"c1ae53277819b25e58de3f7b23d7abc9cc20ad3856a6727a66d150fea5bad6b3"} Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.955849 4859 generic.go:334] "Generic (PLEG): container finished" podID="da104415-69df-456f-9d81-1e514fc3249f" containerID="5fc7c71a995d3416f5ba6aff4b5e16efe4c9aa577b82b6ca891becfc383a9e49" exitCode=0 Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.955890 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" event={"ID":"da104415-69df-456f-9d81-1e514fc3249f","Type":"ContainerDied","Data":"5fc7c71a995d3416f5ba6aff4b5e16efe4c9aa577b82b6ca891becfc383a9e49"} Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.958054 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" event={"ID":"9cbde904-658d-4fc7-9705-692dc47c50c4","Type":"ContainerStarted","Data":"857d9bf3ce5d43850d55549053af5dfc89e694c1a0e18b565aac3d36d85e9fcf"} Jan 20 09:29:16 crc kubenswrapper[4859]: I0120 09:29:16.967170 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf"] 
Jan 20 09:29:17 crc kubenswrapper[4859]: I0120 09:29:17.977352 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" event={"ID":"8b98dcaa-fc77-41c5-afeb-457e039e9818","Type":"ContainerStarted","Data":"56b163c0601873514266267a65a04d23ea11ec60df039e412a90e98c56f2081a"} Jan 20 09:29:17 crc kubenswrapper[4859]: I0120 09:29:17.981967 4859 generic.go:334] "Generic (PLEG): container finished" podID="da104415-69df-456f-9d81-1e514fc3249f" containerID="e5ee276b0b19fb44fd7f90b7a5be4437ce8d0f8ea8f1f4d5863a18d7414dd991" exitCode=0 Jan 20 09:29:17 crc kubenswrapper[4859]: I0120 09:29:17.982014 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" event={"ID":"da104415-69df-456f-9d81-1e514fc3249f","Type":"ContainerDied","Data":"e5ee276b0b19fb44fd7f90b7a5be4437ce8d0f8ea8f1f4d5863a18d7414dd991"} Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.337119 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.352370 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle\") pod \"da104415-69df-456f-9d81-1e514fc3249f\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.352494 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s6lx\" (UniqueName: \"kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx\") pod \"da104415-69df-456f-9d81-1e514fc3249f\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.352568 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util\") pod \"da104415-69df-456f-9d81-1e514fc3249f\" (UID: \"da104415-69df-456f-9d81-1e514fc3249f\") " Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.354302 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle" (OuterVolumeSpecName: "bundle") pod "da104415-69df-456f-9d81-1e514fc3249f" (UID: "da104415-69df-456f-9d81-1e514fc3249f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.359307 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx" (OuterVolumeSpecName: "kube-api-access-7s6lx") pod "da104415-69df-456f-9d81-1e514fc3249f" (UID: "da104415-69df-456f-9d81-1e514fc3249f"). InnerVolumeSpecName "kube-api-access-7s6lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.364848 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util" (OuterVolumeSpecName: "util") pod "da104415-69df-456f-9d81-1e514fc3249f" (UID: "da104415-69df-456f-9d81-1e514fc3249f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.454829 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s6lx\" (UniqueName: \"kubernetes.io/projected/da104415-69df-456f-9d81-1e514fc3249f-kube-api-access-7s6lx\") on node \"crc\" DevicePath \"\"" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.454868 4859 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-util\") on node \"crc\" DevicePath \"\"" Jan 20 09:29:19 crc kubenswrapper[4859]: I0120 09:29:19.454882 4859 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da104415-69df-456f-9d81-1e514fc3249f-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.014679 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" event={"ID":"da104415-69df-456f-9d81-1e514fc3249f","Type":"ContainerDied","Data":"9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6"} Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.014965 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9223e09f2a78efe67fcd393b8245e5164afda3ba50030f18bb0211550b3a00b6" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.015034 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.664455 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-55d84759fb-bl7qs"] Jan 20 09:29:20 crc kubenswrapper[4859]: E0120 09:29:20.664746 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="util" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.664758 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="util" Jan 20 09:29:20 crc kubenswrapper[4859]: E0120 09:29:20.664767 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="pull" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.664773 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="pull" Jan 20 09:29:20 crc kubenswrapper[4859]: E0120 09:29:20.664799 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="extract" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.664805 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="extract" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.664922 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="da104415-69df-456f-9d81-1e514fc3249f" containerName="extract" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.665353 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.671322 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.671350 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.671394 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-s4cx7" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.671675 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.693052 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-55d84759fb-bl7qs"] Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.773633 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpd52\" (UniqueName: \"kubernetes.io/projected/5da0604c-17d2-458b-9890-9be299bb6fff-kube-api-access-kpd52\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.773923 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-apiservice-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.774073 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-webhook-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.875424 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpd52\" (UniqueName: \"kubernetes.io/projected/5da0604c-17d2-458b-9890-9be299bb6fff-kube-api-access-kpd52\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.875485 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-apiservice-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.875556 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-webhook-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.888343 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-apiservice-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.888382 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5da0604c-17d2-458b-9890-9be299bb6fff-webhook-cert\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.900452 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpd52\" (UniqueName: \"kubernetes.io/projected/5da0604c-17d2-458b-9890-9be299bb6fff-kube-api-access-kpd52\") pod \"elastic-operator-55d84759fb-bl7qs\" (UID: \"5da0604c-17d2-458b-9890-9be299bb6fff\") " pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:20 crc kubenswrapper[4859]: I0120 09:29:20.986114 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" Jan 20 09:29:26 crc kubenswrapper[4859]: I0120 09:29:26.675499 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-55d84759fb-bl7qs"] Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.056505 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" event={"ID":"9cbde904-658d-4fc7-9705-692dc47c50c4","Type":"ContainerStarted","Data":"c2f692ddcf5e310783e464698c68c02a57466445d3c7f3fc1bd5988ac0f283bb"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.058175 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" event={"ID":"a4b3a24c-3b9b-4a53-9b19-3ff693862438","Type":"ContainerStarted","Data":"c0db60453d01396e3abd33162e2546f148832f697a4278540263a864bb307c27"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.058292 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:27 crc kubenswrapper[4859]: 
I0120 09:29:27.059496 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" event={"ID":"5da0604c-17d2-458b-9890-9be299bb6fff","Type":"ContainerStarted","Data":"9b9ee65bbc94701c0a735a92d6090dc1f5459a6460b32b258205106c095a4f5c"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.061058 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" event={"ID":"7698897d-ef4b-4fc1-a25a-634a2abae6c7","Type":"ContainerStarted","Data":"b61c844b880778d587c46ae4ebdc1882eddf058aa17b5df8d2fcb1e88b044968"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.061270 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.062855 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" event={"ID":"8b98dcaa-fc77-41c5-afeb-457e039e9818","Type":"ContainerStarted","Data":"88096cd07f86b0d6c22b4e9828a8a446e4ef391e76ce825b612ec67139364966"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.064235 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" event={"ID":"5fd94288-36b5-45a6-8b54-f91cf71c4db8","Type":"ContainerStarted","Data":"71e29c85db055db99c2e04f1eb2c528b78afc4420a55dc52336fdb855b34e3c7"} Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.066867 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.075338 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8" podStartSLOduration=5.48588936 
podStartE2EDuration="15.075319734s" podCreationTimestamp="2026-01-20 09:29:12 +0000 UTC" firstStartedPulling="2026-01-20 09:29:16.864437512 +0000 UTC m=+631.620453688" lastFinishedPulling="2026-01-20 09:29:26.453867886 +0000 UTC m=+641.209884062" observedRunningTime="2026-01-20 09:29:27.074063124 +0000 UTC m=+641.830079300" watchObservedRunningTime="2026-01-20 09:29:27.075319734 +0000 UTC m=+641.831335910" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.099539 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-22h5d" podStartSLOduration=5.505205956 podStartE2EDuration="15.099521359s" podCreationTimestamp="2026-01-20 09:29:12 +0000 UTC" firstStartedPulling="2026-01-20 09:29:16.872370603 +0000 UTC m=+631.628386779" lastFinishedPulling="2026-01-20 09:29:26.466685976 +0000 UTC m=+641.222702182" observedRunningTime="2026-01-20 09:29:27.094525078 +0000 UTC m=+641.850541254" watchObservedRunningTime="2026-01-20 09:29:27.099521359 +0000 UTC m=+641.855537535" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.119266 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf" podStartSLOduration=5.599005254 podStartE2EDuration="15.119248706s" podCreationTimestamp="2026-01-20 09:29:12 +0000 UTC" firstStartedPulling="2026-01-20 09:29:16.978067447 +0000 UTC m=+631.734083623" lastFinishedPulling="2026-01-20 09:29:26.498310859 +0000 UTC m=+641.254327075" observedRunningTime="2026-01-20 09:29:27.118464216 +0000 UTC m=+641.874480392" watchObservedRunningTime="2026-01-20 09:29:27.119248706 +0000 UTC m=+641.875264882" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.179056 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-ft4zn" podStartSLOduration=4.547379837 podStartE2EDuration="14.179036551s" 
podCreationTimestamp="2026-01-20 09:29:13 +0000 UTC" firstStartedPulling="2026-01-20 09:29:16.865553869 +0000 UTC m=+631.621570045" lastFinishedPulling="2026-01-20 09:29:26.497210543 +0000 UTC m=+641.253226759" observedRunningTime="2026-01-20 09:29:27.147944989 +0000 UTC m=+641.903961185" watchObservedRunningTime="2026-01-20 09:29:27.179036551 +0000 UTC m=+641.935052727" Jan 20 09:29:27 crc kubenswrapper[4859]: I0120 09:29:27.179171 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" podStartSLOduration=4.4942157 podStartE2EDuration="14.179166604s" podCreationTimestamp="2026-01-20 09:29:13 +0000 UTC" firstStartedPulling="2026-01-20 09:29:16.812242719 +0000 UTC m=+631.568258895" lastFinishedPulling="2026-01-20 09:29:26.497193623 +0000 UTC m=+641.253209799" observedRunningTime="2026-01-20 09:29:27.174278715 +0000 UTC m=+641.930294901" watchObservedRunningTime="2026-01-20 09:29:27.179166604 +0000 UTC m=+641.935182780" Jan 20 09:29:30 crc kubenswrapper[4859]: I0120 09:29:30.083058 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" event={"ID":"5da0604c-17d2-458b-9890-9be299bb6fff","Type":"ContainerStarted","Data":"93580031b68f813282b8b40d13189b954f66f6c7f02ea3008b7f737d6d5e7b11"} Jan 20 09:29:30 crc kubenswrapper[4859]: I0120 09:29:30.107474 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-55d84759fb-bl7qs" podStartSLOduration=6.885845917 podStartE2EDuration="10.107455791s" podCreationTimestamp="2026-01-20 09:29:20 +0000 UTC" firstStartedPulling="2026-01-20 09:29:26.690332431 +0000 UTC m=+641.446348597" lastFinishedPulling="2026-01-20 09:29:29.911942295 +0000 UTC m=+644.667958471" observedRunningTime="2026-01-20 09:29:30.101917247 +0000 UTC m=+644.857933423" watchObservedRunningTime="2026-01-20 09:29:30.107455791 +0000 UTC m=+644.863471977" Jan 20 09:29:33 crc 
kubenswrapper[4859]: I0120 09:29:33.151793 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.153042 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.155678 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.156117 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.156892 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.157325 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.157460 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.157487 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.157469 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.157764 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-r7mq8" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.160506 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" 
Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.172674 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179423 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179480 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179521 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179567 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179597 4859 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179650 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179676 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179704 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179747 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-logs\") pod 
\"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179769 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179840 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179864 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179892 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc 
kubenswrapper[4859]: I0120 09:29:33.179917 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.179944 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280363 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280400 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280423 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280439 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280457 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280476 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280495 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280510 4859 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280535 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280555 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280572 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280587 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280607 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280635 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.280652 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281194 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281255 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-data\") 
pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281300 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281334 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281500 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.281852 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.283285 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.285684 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.286274 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.286475 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.287267 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.288007 4859 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.288212 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.291379 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.291944 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1d0077f1-7566-4d6b-8ce5-ba4354570e1a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1d0077f1-7566-4d6b-8ce5-ba4354570e1a\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.516649 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.546836 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6bhr5" Jan 20 09:29:33 crc kubenswrapper[4859]: I0120 09:29:33.792398 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 09:29:34 crc kubenswrapper[4859]: I0120 09:29:34.106978 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1d0077f1-7566-4d6b-8ce5-ba4354570e1a","Type":"ContainerStarted","Data":"fe0949d38cf843261816d41fc35489c2d7e68e0fb7de62da7a37b75525a7b213"} Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.058912 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp"] Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.060085 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.063244 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.063581 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.063640 4859 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-jn8tf" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.092326 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp"] Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.216913 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5865734-9af3-451c-aa99-fe8ece162b63-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.217027 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v552\" (UniqueName: \"kubernetes.io/projected/b5865734-9af3-451c-aa99-fe8ece162b63-kube-api-access-9v552\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.318343 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9v552\" (UniqueName: \"kubernetes.io/projected/b5865734-9af3-451c-aa99-fe8ece162b63-kube-api-access-9v552\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.318473 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5865734-9af3-451c-aa99-fe8ece162b63-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.319005 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5865734-9af3-451c-aa99-fe8ece162b63-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.350139 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v552\" (UniqueName: \"kubernetes.io/projected/b5865734-9af3-451c-aa99-fe8ece162b63-kube-api-access-9v552\") pod \"cert-manager-operator-controller-manager-5446d6888b-tm5tp\" (UID: \"b5865734-9af3-451c-aa99-fe8ece162b63\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.386699 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" Jan 20 09:29:36 crc kubenswrapper[4859]: I0120 09:29:36.660024 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp"] Jan 20 09:29:36 crc kubenswrapper[4859]: W0120 09:29:36.664976 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5865734_9af3_451c_aa99_fe8ece162b63.slice/crio-c1775c3b43ead2634639528106c3b808691c0c77ff97984ae39203e389e86cf1 WatchSource:0}: Error finding container c1775c3b43ead2634639528106c3b808691c0c77ff97984ae39203e389e86cf1: Status 404 returned error can't find the container with id c1775c3b43ead2634639528106c3b808691c0c77ff97984ae39203e389e86cf1 Jan 20 09:29:37 crc kubenswrapper[4859]: I0120 09:29:37.126603 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" event={"ID":"b5865734-9af3-451c-aa99-fe8ece162b63","Type":"ContainerStarted","Data":"c1775c3b43ead2634639528106c3b808691c0c77ff97984ae39203e389e86cf1"} Jan 20 09:29:55 crc kubenswrapper[4859]: E0120 09:29:55.808520 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Jan 20 09:29:55 crc kubenswrapper[4859]: E0120 09:29:55.809420 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) 
--trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) --unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,
},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9v552,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-tm5tp_cert-manager-operator(b5865734-9af3-451c-aa99-fe8ece162b63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 09:29:55 crc kubenswrapper[4859]: E0120 09:29:55.810606 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" podUID="b5865734-9af3-451c-aa99-fe8ece162b63" Jan 20 09:29:56 crc kubenswrapper[4859]: E0120 09:29:56.047178 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Jan 20 09:29:56 crc kubenswrapper[4859]: E0120 09:29:56.047409 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,Valu
eFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasti
c-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(1d0077f1-7566-4d6b-8ce5-ba4354570e1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 09:29:56 crc kubenswrapper[4859]: E0120 09:29:56.048629 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" Jan 20 09:29:56 crc kubenswrapper[4859]: E0120 09:29:56.269683 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" podUID="b5865734-9af3-451c-aa99-fe8ece162b63" Jan 20 09:29:56 crc kubenswrapper[4859]: E0120 09:29:56.269371 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" Jan 20 09:29:56 crc kubenswrapper[4859]: I0120 09:29:56.456321 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 09:29:56 crc kubenswrapper[4859]: I0120 09:29:56.491999 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 09:29:57 crc kubenswrapper[4859]: E0120 09:29:57.275121 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" Jan 20 09:29:58 crc kubenswrapper[4859]: E0120 09:29:58.284007 4859 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.172946 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms"] Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.174500 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.176956 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.177400 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.189593 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms"] Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.368940 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.369022 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.369093 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjj7n\" (UniqueName: \"kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.470342 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.470450 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjj7n\" (UniqueName: \"kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.470498 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc 
kubenswrapper[4859]: I0120 09:30:00.472571 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.488543 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.500209 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjj7n\" (UniqueName: \"kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n\") pod \"collect-profiles-29481690-mtgms\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:00 crc kubenswrapper[4859]: I0120 09:30:00.795090 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.197009 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms"] Jan 20 09:30:01 crc kubenswrapper[4859]: W0120 09:30:01.202811 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef5c5b42_a174_4575_b893_91f06451147d.slice/crio-9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4 WatchSource:0}: Error finding container 9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4: Status 404 returned error can't find the container with id 9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4 Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.302989 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" event={"ID":"ef5c5b42-a174-4575-b893-91f06451147d","Type":"ContainerStarted","Data":"9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4"} Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.505532 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.512230 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.514463 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.514674 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.514708 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.515600 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-z6rqc" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.515875 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.535431 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696321 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696359 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696390 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696420 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696444 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696477 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696494 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696509 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696531 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696555 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696587 4859 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mqb\" (UniqueName: \"kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696604 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.696622 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.797970 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798019 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798046 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798077 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798095 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mqb\" (UniqueName: \"kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798115 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 
crc kubenswrapper[4859]: I0120 09:30:01.798135 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798157 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798176 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798222 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798248 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798268 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798301 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.798830 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799124 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799205 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799416 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799013 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799635 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799913 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles\") 
pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.799926 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.800214 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.807069 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.810197 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 
09:30:01.811373 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.818883 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mqb\" (UniqueName: \"kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:01 crc kubenswrapper[4859]: I0120 09:30:01.827855 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:02 crc kubenswrapper[4859]: I0120 09:30:02.101453 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:02 crc kubenswrapper[4859]: I0120 09:30:02.311934 4859 generic.go:334] "Generic (PLEG): container finished" podID="ef5c5b42-a174-4575-b893-91f06451147d" containerID="d3e8ea36f99edac66504e7c522995308f169456f2bc4d50a617dfdfa99e508d7" exitCode=0 Jan 20 09:30:02 crc kubenswrapper[4859]: I0120 09:30:02.311998 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" event={"ID":"ef5c5b42-a174-4575-b893-91f06451147d","Type":"ContainerDied","Data":"d3e8ea36f99edac66504e7c522995308f169456f2bc4d50a617dfdfa99e508d7"} Jan 20 09:30:02 crc kubenswrapper[4859]: I0120 09:30:02.313727 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" 
event={"ID":"18d1274f-2877-4335-a9e5-aa2d6d44e0a1","Type":"ContainerStarted","Data":"6a8417ff460683098787dbcdcd92864248d15c424f38c591627d54fac55efd3f"} Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.584603 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.723504 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume\") pod \"ef5c5b42-a174-4575-b893-91f06451147d\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.723561 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume\") pod \"ef5c5b42-a174-4575-b893-91f06451147d\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.723615 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjj7n\" (UniqueName: \"kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n\") pod \"ef5c5b42-a174-4575-b893-91f06451147d\" (UID: \"ef5c5b42-a174-4575-b893-91f06451147d\") " Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.724403 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef5c5b42-a174-4575-b893-91f06451147d" (UID: "ef5c5b42-a174-4575-b893-91f06451147d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.725077 4859 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5c5b42-a174-4575-b893-91f06451147d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.729630 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef5c5b42-a174-4575-b893-91f06451147d" (UID: "ef5c5b42-a174-4575-b893-91f06451147d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.730746 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n" (OuterVolumeSpecName: "kube-api-access-fjj7n") pod "ef5c5b42-a174-4575-b893-91f06451147d" (UID: "ef5c5b42-a174-4575-b893-91f06451147d"). InnerVolumeSpecName "kube-api-access-fjj7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.826082 4859 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef5c5b42-a174-4575-b893-91f06451147d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:03 crc kubenswrapper[4859]: I0120 09:30:03.826124 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjj7n\" (UniqueName: \"kubernetes.io/projected/ef5c5b42-a174-4575-b893-91f06451147d-kube-api-access-fjj7n\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:04 crc kubenswrapper[4859]: I0120 09:30:04.326473 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" event={"ID":"ef5c5b42-a174-4575-b893-91f06451147d","Type":"ContainerDied","Data":"9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4"} Jan 20 09:30:04 crc kubenswrapper[4859]: I0120 09:30:04.326512 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d94792712e0ef5a8ae23672eebbc34ced6f772fccdc524a2d5c3a3f0cc91cc4" Jan 20 09:30:04 crc kubenswrapper[4859]: I0120 09:30:04.326605 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481690-mtgms" Jan 20 09:30:08 crc kubenswrapper[4859]: I0120 09:30:08.355163 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"18d1274f-2877-4335-a9e5-aa2d6d44e0a1","Type":"ContainerStarted","Data":"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa"} Jan 20 09:30:08 crc kubenswrapper[4859]: E0120 09:30:08.440208 4859 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=2729647850201649141, SKID=, AKID=6D:B6:D6:BE:8A:A4:9F:FE:49:07:28:C9:C4:75:A3:9A:A3:64:1A:49 failed: x509: certificate signed by unknown authority" Jan 20 09:30:09 crc kubenswrapper[4859]: I0120 09:30:09.485100 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.048973 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.049047 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.378022 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" 
event={"ID":"b5865734-9af3-451c-aa99-fe8ece162b63","Type":"ContainerStarted","Data":"fce2ff3bf22fd0a14e59222ac1d8b231b8d3125fca2bc0762c0d5a2ac90461cb"} Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.378287 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" containerName="git-clone" containerID="cri-o://41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa" gracePeriod=30 Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.421355 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-tm5tp" podStartSLOduration=1.20385123 podStartE2EDuration="34.4213366s" podCreationTimestamp="2026-01-20 09:29:36 +0000 UTC" firstStartedPulling="2026-01-20 09:29:36.667474042 +0000 UTC m=+651.423490218" lastFinishedPulling="2026-01-20 09:30:09.884959372 +0000 UTC m=+684.640975588" observedRunningTime="2026-01-20 09:30:10.415737826 +0000 UTC m=+685.171754012" watchObservedRunningTime="2026-01-20 09:30:10.4213366 +0000 UTC m=+685.177352786" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.753688 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_18d1274f-2877-4335-a9e5-aa2d6d44e0a1/git-clone/0.log" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.753776 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.831985 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832345 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832405 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mqb\" (UniqueName: \"kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832463 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832505 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: 
I0120 09:30:10.832560 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832577 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832595 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832637 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832665 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832684 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832717 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.832745 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets\") pod \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\" (UID: \"18d1274f-2877-4335-a9e5-aa2d6d44e0a1\") " Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833154 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833274 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833546 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833774 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833831 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.833890 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.834089 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.834223 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.834593 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.837892 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.838069 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-pull") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "builder-dockercfg-z6rqc-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.838207 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb" (OuterVolumeSpecName: "kube-api-access-v8mqb") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "kube-api-access-v8mqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.838459 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-push") pod "18d1274f-2877-4335-a9e5-aa2d6d44e0a1" (UID: "18d1274f-2877-4335-a9e5-aa2d6d44e0a1"). InnerVolumeSpecName "builder-dockercfg-z6rqc-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939653 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mqb\" (UniqueName: \"kubernetes.io/projected/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-kube-api-access-v8mqb\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939698 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939711 4859 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939723 4859 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939734 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939747 4859 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939758 4859 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 
09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939769 4859 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939800 4859 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939815 4859 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939827 4859 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939838 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-pull\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:10 crc kubenswrapper[4859]: I0120 09:30:10.939852 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/18d1274f-2877-4335-a9e5-aa2d6d44e0a1-builder-dockercfg-z6rqc-push\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.384507 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"1d0077f1-7566-4d6b-8ce5-ba4354570e1a","Type":"ContainerStarted","Data":"7f3a2aaff2e3af5dd7457e30a02033b6d27c4aaaa8730bf6d0f7a7e95030a9dd"} Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386257 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_18d1274f-2877-4335-a9e5-aa2d6d44e0a1/git-clone/0.log" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386302 4859 generic.go:334] "Generic (PLEG): container finished" podID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" containerID="41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa" exitCode=1 Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386330 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"18d1274f-2877-4335-a9e5-aa2d6d44e0a1","Type":"ContainerDied","Data":"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa"} Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386346 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386365 4859 scope.go:117] "RemoveContainer" containerID="41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.386354 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"18d1274f-2877-4335-a9e5-aa2d6d44e0a1","Type":"ContainerDied","Data":"6a8417ff460683098787dbcdcd92864248d15c424f38c591627d54fac55efd3f"} Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.400709 4859 scope.go:117] "RemoveContainer" containerID="41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa" Jan 20 09:30:11 crc kubenswrapper[4859]: E0120 09:30:11.401060 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa\": container with ID starting with 41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa not found: ID does not exist" containerID="41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.401108 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa"} err="failed to get container status \"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa\": rpc error: code = NotFound desc = could not find container \"41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa\": container with ID starting with 41acfa6a4e01d8ce83a7459ede4fc3ffcf7054ac87ec87638af495e3939c3faa not found: ID does not exist" Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.432343 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.436479 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 09:30:11 crc kubenswrapper[4859]: I0120 09:30:11.580429 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" path="/var/lib/kubelet/pods/18d1274f-2877-4335-a9e5-aa2d6d44e0a1/volumes" Jan 20 09:30:12 crc kubenswrapper[4859]: I0120 09:30:12.397098 4859 generic.go:334] "Generic (PLEG): container finished" podID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" containerID="7f3a2aaff2e3af5dd7457e30a02033b6d27c4aaaa8730bf6d0f7a7e95030a9dd" exitCode=0 Jan 20 09:30:12 crc kubenswrapper[4859]: I0120 09:30:12.397204 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1d0077f1-7566-4d6b-8ce5-ba4354570e1a","Type":"ContainerDied","Data":"7f3a2aaff2e3af5dd7457e30a02033b6d27c4aaaa8730bf6d0f7a7e95030a9dd"} Jan 20 09:30:13 crc kubenswrapper[4859]: I0120 09:30:13.417907 4859 generic.go:334] "Generic (PLEG): container finished" podID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" containerID="b704f9b764df400d5d05e52a09e451a4733706dcb3da285a19d6a154a950a7f2" exitCode=0 Jan 20 09:30:13 crc kubenswrapper[4859]: I0120 09:30:13.418204 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1d0077f1-7566-4d6b-8ce5-ba4354570e1a","Type":"ContainerDied","Data":"b704f9b764df400d5d05e52a09e451a4733706dcb3da285a19d6a154a950a7f2"} Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.431656 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1d0077f1-7566-4d6b-8ce5-ba4354570e1a","Type":"ContainerStarted","Data":"318ccb2f72bc583999c2c2206b1e011b51429a17584aebeeedca654ae5c75d94"} Jan 20 09:30:14 crc 
kubenswrapper[4859]: I0120 09:30:14.431916 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457053 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zzsbb"] Jan 20 09:30:14 crc kubenswrapper[4859]: E0120 09:30:14.457283 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5c5b42-a174-4575-b893-91f06451147d" containerName="collect-profiles" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457295 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5c5b42-a174-4575-b893-91f06451147d" containerName="collect-profiles" Jan 20 09:30:14 crc kubenswrapper[4859]: E0120 09:30:14.457309 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" containerName="git-clone" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457315 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" containerName="git-clone" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457403 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d1274f-2877-4335-a9e5-aa2d6d44e0a1" containerName="git-clone" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457416 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5c5b42-a174-4575-b893-91f06451147d" containerName="collect-profiles" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.457817 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.461053 4859 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-825lk" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.462591 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.462601 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.475726 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=4.912479573 podStartE2EDuration="41.4757076s" podCreationTimestamp="2026-01-20 09:29:33 +0000 UTC" firstStartedPulling="2026-01-20 09:29:33.802434115 +0000 UTC m=+648.558450331" lastFinishedPulling="2026-01-20 09:30:10.365662142 +0000 UTC m=+685.121678358" observedRunningTime="2026-01-20 09:30:14.47403273 +0000 UTC m=+689.230048906" watchObservedRunningTime="2026-01-20 09:30:14.4757076 +0000 UTC m=+689.231723786" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.485571 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zzsbb"] Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.583975 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4cj6\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-kube-api-access-l4cj6\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.584283 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.686149 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4cj6\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-kube-api-access-l4cj6\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.686553 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.704714 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4cj6\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-kube-api-access-l4cj6\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.721874 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8536512-4e62-493c-9447-71eb8841c32f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zzsbb\" (UID: \"c8536512-4e62-493c-9447-71eb8841c32f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:14 crc kubenswrapper[4859]: I0120 09:30:14.771086 4859 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:15 crc kubenswrapper[4859]: I0120 09:30:15.251397 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zzsbb"] Jan 20 09:30:15 crc kubenswrapper[4859]: W0120 09:30:15.257929 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8536512_4e62_493c_9447_71eb8841c32f.slice/crio-7ba116c82bcaae2c89b4a84867db9e7dd7dfab412c06e6341d5f35ab33fa2a8d WatchSource:0}: Error finding container 7ba116c82bcaae2c89b4a84867db9e7dd7dfab412c06e6341d5f35ab33fa2a8d: Status 404 returned error can't find the container with id 7ba116c82bcaae2c89b4a84867db9e7dd7dfab412c06e6341d5f35ab33fa2a8d Jan 20 09:30:15 crc kubenswrapper[4859]: I0120 09:30:15.439222 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" event={"ID":"c8536512-4e62-493c-9447-71eb8841c32f","Type":"ContainerStarted","Data":"7ba116c82bcaae2c89b4a84867db9e7dd7dfab412c06e6341d5f35ab33fa2a8d"} Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.603734 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n24zn"] Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.604949 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.607342 4859 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ctlvv" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.611651 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n24zn"] Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.707409 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.707691 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpcr\" (UniqueName: \"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-kube-api-access-rjpcr\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.808678 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.808776 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpcr\" (UniqueName: 
\"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-kube-api-access-rjpcr\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.835640 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpcr\" (UniqueName: \"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-kube-api-access-rjpcr\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.848562 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e319c6c-4401-4927-be16-26ce497d732a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n24zn\" (UID: \"3e319c6c-4401-4927-be16-26ce497d732a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:16 crc kubenswrapper[4859]: I0120 09:30:16.922337 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" Jan 20 09:30:17 crc kubenswrapper[4859]: I0120 09:30:17.229996 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n24zn"] Jan 20 09:30:17 crc kubenswrapper[4859]: W0120 09:30:17.244971 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e319c6c_4401_4927_be16_26ce497d732a.slice/crio-4ef9b7da7dceec2f89f9a4e65d5e4f975f97823f94d159c10476a25884a3d8c4 WatchSource:0}: Error finding container 4ef9b7da7dceec2f89f9a4e65d5e4f975f97823f94d159c10476a25884a3d8c4: Status 404 returned error can't find the container with id 4ef9b7da7dceec2f89f9a4e65d5e4f975f97823f94d159c10476a25884a3d8c4 Jan 20 09:30:17 crc kubenswrapper[4859]: I0120 09:30:17.455851 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" event={"ID":"3e319c6c-4401-4927-be16-26ce497d732a","Type":"ContainerStarted","Data":"4ef9b7da7dceec2f89f9a4e65d5e4f975f97823f94d159c10476a25884a3d8c4"} Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.006362 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.008250 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.010912 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.012018 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-ca" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.012049 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-sys-config" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.012058 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-global-ca" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.016227 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-z6rqc" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.083225 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.169996 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hthg4\" (UniqueName: \"kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170071 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170101 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170242 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170364 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170439 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: 
\"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170504 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170541 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170589 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170758 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170855 4859 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170924 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.170973 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272687 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272743 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272801 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272846 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272865 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272888 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272880 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272906 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.272976 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.273067 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.273084 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.273125 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.273147 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.273237 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hthg4\" (UniqueName: \"kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.274142 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.274636 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.274836 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.274900 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.275205 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.275503 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.275966 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.276219 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.278295 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.278820 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.285687 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.287498 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hthg4\" (UniqueName: \"kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4\") pod \"service-telemetry-framework-index-2-build\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:21 crc kubenswrapper[4859]: I0120 09:30:21.327089 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:23 crc kubenswrapper[4859]: I0120 09:30:23.644558 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" containerName="elasticsearch" probeResult="failure" output=<
Jan 20 09:30:23 crc kubenswrapper[4859]: {"timestamp": "2026-01-20T09:30:23+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 20 09:30:23 crc kubenswrapper[4859]: >
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.007495 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Jan 20 09:30:26 crc kubenswrapper[4859]: W0120 09:30:26.014311 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod450a63e1_746d_4208_8c17_bf28374ab82b.slice/crio-daf056652bcee20bc4ba70b4491ff7e7962fef466573f94d22794301fc17902f WatchSource:0}: Error finding container daf056652bcee20bc4ba70b4491ff7e7962fef466573f94d22794301fc17902f: Status 404 returned error can't find the container with id daf056652bcee20bc4ba70b4491ff7e7962fef466573f94d22794301fc17902f
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.536084 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"450a63e1-746d-4208-8c17-bf28374ab82b","Type":"ContainerStarted","Data":"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"}
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.536465 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"450a63e1-746d-4208-8c17-bf28374ab82b","Type":"ContainerStarted","Data":"daf056652bcee20bc4ba70b4491ff7e7962fef466573f94d22794301fc17902f"}
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.538088 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" event={"ID":"c8536512-4e62-493c-9447-71eb8841c32f","Type":"ContainerStarted","Data":"02647fdd6e61e627a5040bfa27cc23e79d915d14bda7a3286ceac259058353c3"}
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.538244 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb"
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.540225 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" event={"ID":"3e319c6c-4401-4927-be16-26ce497d732a","Type":"ContainerStarted","Data":"3e6189f55c9bcd4655630a46e561261116f8a11cfc800e7b5cd14c5668ed52c7"}
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.587651 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" podStartSLOduration=1.983971453 podStartE2EDuration="12.587632154s" podCreationTimestamp="2026-01-20 09:30:14 +0000 UTC" firstStartedPulling="2026-01-20 09:30:15.263616534 +0000 UTC m=+690.019632710" lastFinishedPulling="2026-01-20 09:30:25.867277235 +0000 UTC m=+700.623293411" observedRunningTime="2026-01-20 09:30:26.586550907 +0000 UTC m=+701.342567083" watchObservedRunningTime="2026-01-20 09:30:26.587632154 +0000 UTC m=+701.343648320"
Jan 20 09:30:26 crc kubenswrapper[4859]: E0120 09:30:26.593241 4859 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=2729647850201649141, SKID=, AKID=6D:B6:D6:BE:8A:A4:9F:FE:49:07:28:C9:C4:75:A3:9A:A3:64:1A:49 failed: x509: certificate signed by unknown authority"
Jan 20 09:30:26 crc kubenswrapper[4859]: I0120 09:30:26.610623 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n24zn" podStartSLOduration=1.958822126 podStartE2EDuration="10.610609395s" podCreationTimestamp="2026-01-20 09:30:16 +0000 UTC" firstStartedPulling="2026-01-20 09:30:17.248741744 +0000 UTC m=+692.004757930" lastFinishedPulling="2026-01-20 09:30:25.900529023 +0000 UTC m=+700.656545199" observedRunningTime="2026-01-20 09:30:26.609020067 +0000 UTC m=+701.365036253" watchObservedRunningTime="2026-01-20 09:30:26.610609395 +0000 UTC m=+701.366625571"
Jan 20 09:30:27 crc kubenswrapper[4859]: I0120 09:30:27.621932 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Jan 20 09:30:28 crc kubenswrapper[4859]: I0120 09:30:28.554403 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="450a63e1-746d-4208-8c17-bf28374ab82b" containerName="git-clone" containerID="cri-o://2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42" gracePeriod=30
Jan 20 09:30:28 crc kubenswrapper[4859]: I0120 09:30:28.630855 4859 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="1d0077f1-7566-4d6b-8ce5-ba4354570e1a" containerName="elasticsearch" probeResult="failure" output=<
Jan 20 09:30:28 crc kubenswrapper[4859]: {"timestamp": "2026-01-20T09:30:28+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 20 09:30:28 crc kubenswrapper[4859]: >
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.471310 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_450a63e1-746d-4208-8c17-bf28374ab82b/git-clone/0.log"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.471628 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560052 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_450a63e1-746d-4208-8c17-bf28374ab82b/git-clone/0.log"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560098 4859 generic.go:334] "Generic (PLEG): container finished" podID="450a63e1-746d-4208-8c17-bf28374ab82b" containerID="2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42" exitCode=1
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560122 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"450a63e1-746d-4208-8c17-bf28374ab82b","Type":"ContainerDied","Data":"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"}
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560144 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"450a63e1-746d-4208-8c17-bf28374ab82b","Type":"ContainerDied","Data":"daf056652bcee20bc4ba70b4491ff7e7962fef466573f94d22794301fc17902f"}
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560159 4859 scope.go:117] "RemoveContainer" containerID="2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.560163 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.579101 4859 scope.go:117] "RemoveContainer" containerID="2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"
Jan 20 09:30:29 crc kubenswrapper[4859]: E0120 09:30:29.580231 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42\": container with ID starting with 2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42 not found: ID does not exist" containerID="2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.580267 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42"} err="failed to get container status \"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42\": rpc error: code = NotFound desc = could not find container \"2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42\": container with ID starting with 2e8d523b02b1fbb0140f61b509f2bab011eec7c0a26c9f2e9cbc0c8ce4f2ed42 not found: ID does not exist"
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.598902 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.598943 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.598996 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599013 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599046 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599096 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599131 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hthg4\" (UniqueName: \"kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599150 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599169 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599183 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599205 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599220 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599257 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets\") pod \"450a63e1-746d-4208-8c17-bf28374ab82b\" (UID: \"450a63e1-746d-4208-8c17-bf28374ab82b\") "
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599504 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599692 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.599888 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.600079 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.600389 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.600509 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.600625 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.601375 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.601673 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.606114 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.618509 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-pull") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "builder-dockercfg-z6rqc-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.618561 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4" (OuterVolumeSpecName: "kube-api-access-hthg4") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "kube-api-access-hthg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.619348 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-push") pod "450a63e1-746d-4208-8c17-bf28374ab82b" (UID: "450a63e1-746d-4208-8c17-bf28374ab82b"). InnerVolumeSpecName "builder-dockercfg-z6rqc-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700691 4859 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700731 4859 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700744 4859 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700755 4859 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700769 4859 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700799 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700810 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/450a63e1-746d-4208-8c17-bf28374ab82b-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700821 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hthg4\" (UniqueName: \"kubernetes.io/projected/450a63e1-746d-4208-8c17-bf28374ab82b-kube-api-access-hthg4\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700833 4859 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700845 4859 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/450a63e1-746d-4208-8c17-bf28374ab82b-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700857 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-push\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700868 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/450a63e1-746d-4208-8c17-bf28374ab82b-builder-dockercfg-z6rqc-pull\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.700881 4859 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/450a63e1-746d-4208-8c17-bf28374ab82b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.890955 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Jan 20 09:30:29 crc kubenswrapper[4859]: I0120 09:30:29.896365 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Jan 20 09:30:31 crc kubenswrapper[4859]: I0120 09:30:31.579896 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450a63e1-746d-4208-8c17-bf28374ab82b" path="/var/lib/kubelet/pods/450a63e1-746d-4208-8c17-bf28374ab82b/volumes"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.351983 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fw68s"]
Jan 20 09:30:33 crc kubenswrapper[4859]: E0120 09:30:33.352571 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450a63e1-746d-4208-8c17-bf28374ab82b" containerName="git-clone"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.352591 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="450a63e1-746d-4208-8c17-bf28374ab82b" containerName="git-clone"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.352758 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="450a63e1-746d-4208-8c17-bf28374ab82b" containerName="git-clone"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.353336 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.360017 4859 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dslk6"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.373742 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fw68s"]
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.449930 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229nv\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-kube-api-access-229nv\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.450006 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-bound-sa-token\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.551186 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-bound-sa-token\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.551384 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229nv\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-kube-api-access-229nv\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.591605 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-bound-sa-token\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.595652 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229nv\" (UniqueName: \"kubernetes.io/projected/cf5440c8-2dc3-4a2e-8a26-11592e8e38ed-kube-api-access-229nv\") pod \"cert-manager-86cb77c54b-fw68s\" (UID: \"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed\") " pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.671997 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-fw68s"
Jan 20 09:30:33 crc kubenswrapper[4859]: I0120 09:30:33.901503 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 09:30:34 crc kubenswrapper[4859]: I0120 09:30:34.151759 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-fw68s"]
Jan 20 09:30:34 crc kubenswrapper[4859]: W0120 09:30:34.158800 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf5440c8_2dc3_4a2e_8a26_11592e8e38ed.slice/crio-0f2c056a5ea3eb190ec96fe188ee4cbd770bb4e92a16dfe8448e2fa9cdfc9244 WatchSource:0}: Error finding container 0f2c056a5ea3eb190ec96fe188ee4cbd770bb4e92a16dfe8448e2fa9cdfc9244: Status 404 returned error can't find the container with id 0f2c056a5ea3eb190ec96fe188ee4cbd770bb4e92a16dfe8448e2fa9cdfc9244
Jan 20 09:30:34 crc kubenswrapper[4859]: I0120 09:30:34.597819 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-fw68s" event={"ID":"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed","Type":"ContainerStarted","Data":"c075b9f1b4f51dd0f4115034a5ebc96b154ff0e71502109ac1dc524fd00d8172"}
Jan 20 09:30:34 crc kubenswrapper[4859]: I0120 09:30:34.598346 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-fw68s" event={"ID":"cf5440c8-2dc3-4a2e-8a26-11592e8e38ed","Type":"ContainerStarted","Data":"0f2c056a5ea3eb190ec96fe188ee4cbd770bb4e92a16dfe8448e2fa9cdfc9244"}
Jan 20 09:30:34 crc kubenswrapper[4859]: I0120 09:30:34.618808 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-fw68s" podStartSLOduration=1.618776091 podStartE2EDuration="1.618776091s" podCreationTimestamp="2026-01-20 09:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01
00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:30:34.617556492 +0000 UTC m=+709.373572758" watchObservedRunningTime="2026-01-20 09:30:34.618776091 +0000 UTC m=+709.374792267" Jan 20 09:30:34 crc kubenswrapper[4859]: I0120 09:30:34.775998 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-zzsbb" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.176592 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.179169 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.182411 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-ca" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.182647 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-sys-config" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.182988 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-z6rqc" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.183138 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-global-ca" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.186458 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.226068 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.240685 4859 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.240863 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.240908 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48xw\" (UniqueName: \"kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.240946 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241037 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241076 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241119 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241181 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241236 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241274 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241304 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241340 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.241406 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 
09:30:39.343276 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.343391 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.343427 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g48xw\" (UniqueName: \"kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.343475 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.343497 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.343584 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.344356 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345342 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345439 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345504 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345594 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345674 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345714 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345743 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345847 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345923 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.345944 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.347753 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.347955 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.348119 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.350945 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.352822 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.353584 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.371681 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.376777 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.389658 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g48xw\" (UniqueName: \"kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.513740 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:39 crc kubenswrapper[4859]: I0120 09:30:39.819344 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:40 crc kubenswrapper[4859]: I0120 09:30:40.048147 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:30:40 crc kubenswrapper[4859]: I0120 09:30:40.048228 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:30:40 crc kubenswrapper[4859]: I0120 09:30:40.650699 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f9891dd2-489e-414e-89cd-63c80eac0ce6","Type":"ContainerStarted","Data":"ae7c7effd9354fae11a28b94cef9e80481bbc182492ab0042b26b77e2fe62e7c"} Jan 20 09:30:42 crc kubenswrapper[4859]: I0120 09:30:42.665308 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f9891dd2-489e-414e-89cd-63c80eac0ce6","Type":"ContainerStarted","Data":"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8"} Jan 20 09:30:42 crc kubenswrapper[4859]: E0120 09:30:42.733610 4859 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=2729647850201649141, SKID=, AKID=6D:B6:D6:BE:8A:A4:9F:FE:49:07:28:C9:C4:75:A3:9A:A3:64:1A:49 failed: x509: certificate signed by unknown 
authority" Jan 20 09:30:43 crc kubenswrapper[4859]: I0120 09:30:43.766972 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:44 crc kubenswrapper[4859]: I0120 09:30:44.681419 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="f9891dd2-489e-414e-89cd-63c80eac0ce6" containerName="git-clone" containerID="cri-o://cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8" gracePeriod=30 Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.170754 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_f9891dd2-489e-414e-89cd-63c80eac0ce6/git-clone/0.log" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.171148 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.238725 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.238855 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.238899 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.238936 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.238993 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g48xw\" (UniqueName: \"kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239064 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239094 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239150 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc 
kubenswrapper[4859]: I0120 09:30:45.239184 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239228 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239269 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239321 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.239355 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run\") pod \"f9891dd2-489e-414e-89cd-63c80eac0ce6\" (UID: \"f9891dd2-489e-414e-89cd-63c80eac0ce6\") " Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.240177 4859 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.240921 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.241838 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.241890 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.242451 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.242907 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.243007 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.243171 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.243490 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.249548 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-push") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "builder-dockercfg-z6rqc-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.249616 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-pull") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "builder-dockercfg-z6rqc-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.250714 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw" (OuterVolumeSpecName: "kube-api-access-g48xw") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "kube-api-access-g48xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.252225 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "f9891dd2-489e-414e-89cd-63c80eac0ce6" (UID: "f9891dd2-489e-414e-89cd-63c80eac0ce6"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341580 4859 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341621 4859 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341654 4859 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341673 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g48xw\" (UniqueName: \"kubernetes.io/projected/f9891dd2-489e-414e-89cd-63c80eac0ce6-kube-api-access-g48xw\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341685 4859 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildcachedir\") on node \"crc\" 
DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341697 4859 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9891dd2-489e-414e-89cd-63c80eac0ce6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341708 4859 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f9891dd2-489e-414e-89cd-63c80eac0ce6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341718 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-push\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341730 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-builder-dockercfg-z6rqc-pull\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341742 4859 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f9891dd2-489e-414e-89cd-63c80eac0ce6-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341755 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341766 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.341797 4859 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f9891dd2-489e-414e-89cd-63c80eac0ce6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.693765 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_f9891dd2-489e-414e-89cd-63c80eac0ce6/git-clone/0.log" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.693901 4859 generic.go:334] "Generic (PLEG): container finished" podID="f9891dd2-489e-414e-89cd-63c80eac0ce6" containerID="cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8" exitCode=1 Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.693953 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f9891dd2-489e-414e-89cd-63c80eac0ce6","Type":"ContainerDied","Data":"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8"} Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.694010 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f9891dd2-489e-414e-89cd-63c80eac0ce6","Type":"ContainerDied","Data":"ae7c7effd9354fae11a28b94cef9e80481bbc182492ab0042b26b77e2fe62e7c"} Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.694034 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.694048 4859 scope.go:117] "RemoveContainer" containerID="cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.736830 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.738647 4859 scope.go:117] "RemoveContainer" containerID="cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8" Jan 20 09:30:45 crc kubenswrapper[4859]: E0120 09:30:45.739442 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8\": container with ID starting with cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8 not found: ID does not exist" containerID="cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.739516 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8"} err="failed to get container status \"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8\": rpc error: code = NotFound desc = could not find container \"cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8\": container with ID starting with cb2294af59c35cffcde6baa1cbbb94cb619f2832df4eedda53dbcf59626b93f8 not found: ID does not exist" Jan 20 09:30:45 crc kubenswrapper[4859]: I0120 09:30:45.742870 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Jan 20 09:30:47 crc kubenswrapper[4859]: I0120 09:30:47.583841 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f9891dd2-489e-414e-89cd-63c80eac0ce6" path="/var/lib/kubelet/pods/f9891dd2-489e-414e-89cd-63c80eac0ce6/volumes" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.235140 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:30:55 crc kubenswrapper[4859]: E0120 09:30:55.235843 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9891dd2-489e-414e-89cd-63c80eac0ce6" containerName="git-clone" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.235864 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9891dd2-489e-414e-89cd-63c80eac0ce6" containerName="git-clone" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.236012 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9891dd2-489e-414e-89cd-63c80eac0ce6" containerName="git-clone" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.237106 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.240415 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-ca" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.242176 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-global-ca" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.242976 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-sys-config" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.242978 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.243585 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-z6rqc" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.262887 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283069 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283112 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root\") pod 
\"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283137 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283160 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283178 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283197 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc 
kubenswrapper[4859]: I0120 09:30:55.283232 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283261 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283278 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283296 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcwk\" (UniqueName: \"kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283317 4859 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283334 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.283350 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385136 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385256 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir\") pod 
\"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385318 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385383 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcwk\" (UniqueName: \"kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385404 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385449 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc 
kubenswrapper[4859]: I0120 09:30:55.385503 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385555 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385630 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385669 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385722 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache\") pod 
\"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385828 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385867 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.385910 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.386044 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.386238 4859 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.386616 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.386635 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.386756 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.387034 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 
20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.387207 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.387408 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.393297 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.398754 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.407483 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.412814 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcwk\" (UniqueName: \"kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.588022 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:30:55 crc kubenswrapper[4859]: I0120 09:30:55.845273 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:30:55 crc kubenswrapper[4859]: W0120 09:30:55.850923 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40bbf81_5ecd_4ae6_bf9e_46196eea56ee.slice/crio-b2bc546410cdfcb6b761459cde3e31b3303296985c0844f8572012d4c2d00cb4 WatchSource:0}: Error finding container b2bc546410cdfcb6b761459cde3e31b3303296985c0844f8572012d4c2d00cb4: Status 404 returned error can't find the container with id b2bc546410cdfcb6b761459cde3e31b3303296985c0844f8572012d4c2d00cb4 Jan 20 09:30:56 crc kubenswrapper[4859]: I0120 09:30:56.787396 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee","Type":"ContainerStarted","Data":"b2bc546410cdfcb6b761459cde3e31b3303296985c0844f8572012d4c2d00cb4"} Jan 20 
09:31:00 crc kubenswrapper[4859]: I0120 09:31:00.825555 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee","Type":"ContainerStarted","Data":"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1"} Jan 20 09:31:03 crc kubenswrapper[4859]: E0120 09:31:03.925450 4859 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=2729647850201649141, SKID=, AKID=6D:B6:D6:BE:8A:A4:9F:FE:49:07:28:C9:C4:75:A3:9A:A3:64:1A:49 failed: x509: certificate signed by unknown authority" Jan 20 09:31:04 crc kubenswrapper[4859]: I0120 09:31:04.990137 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:31:04 crc kubenswrapper[4859]: I0120 09:31:04.990345 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" containerName="git-clone" containerID="cri-o://44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1" gracePeriod=30 Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.416957 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_c40bbf81-5ecd-4ae6-bf9e-46196eea56ee/git-clone/0.log" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.417098 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.575485 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.575981 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.576097 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.576249 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.576388 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.576726 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577063 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577243 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577421 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgcwk\" (UniqueName: \"kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 
09:31:05.577680 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577930 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578125 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578292 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578425 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578596 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: 
\"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull\") pod \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\" (UID: \"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee\") " Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.576528 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577176 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.577286 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578445 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.578945 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580002 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580724 4859 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580766 4859 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580812 4859 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580834 4859 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-blob-cache\") on 
node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580852 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580871 4859 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580889 4859 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.580906 4859 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.581317 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.583547 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk" (OuterVolumeSpecName: "kube-api-access-zgcwk") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). 
InnerVolumeSpecName "kube-api-access-zgcwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.583717 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.584546 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-push") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "builder-dockercfg-z6rqc-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.586951 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull" (OuterVolumeSpecName: "builder-dockercfg-z6rqc-pull") pod "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" (UID: "c40bbf81-5ecd-4ae6-bf9e-46196eea56ee"). InnerVolumeSpecName "builder-dockercfg-z6rqc-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.683563 4859 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.683642 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgcwk\" (UniqueName: \"kubernetes.io/projected/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-kube-api-access-zgcwk\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.683664 4859 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.683678 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-push\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-push\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.683693 4859 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-z6rqc-pull\" (UniqueName: \"kubernetes.io/secret/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee-builder-dockercfg-z6rqc-pull\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874338 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_c40bbf81-5ecd-4ae6-bf9e-46196eea56ee/git-clone/0.log" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874417 4859 generic.go:334] "Generic (PLEG): container finished" podID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" 
containerID="44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1" exitCode=1 Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874460 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee","Type":"ContainerDied","Data":"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1"} Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874506 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"c40bbf81-5ecd-4ae6-bf9e-46196eea56ee","Type":"ContainerDied","Data":"b2bc546410cdfcb6b761459cde3e31b3303296985c0844f8572012d4c2d00cb4"} Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874556 4859 scope.go:117] "RemoveContainer" containerID="44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.874577 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.904143 4859 scope.go:117] "RemoveContainer" containerID="44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1" Jan 20 09:31:05 crc kubenswrapper[4859]: E0120 09:31:05.904649 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1\": container with ID starting with 44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1 not found: ID does not exist" containerID="44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.904700 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1"} err="failed to get container status \"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1\": rpc error: code = NotFound desc = could not find container \"44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1\": container with ID starting with 44ce5000b189f62a613a3210095a06b5a360a74ec9d5a6d6a5f386e3f3f5b5c1 not found: ID does not exist" Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.923259 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:31:05 crc kubenswrapper[4859]: I0120 09:31:05.932164 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.271498 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:06 crc kubenswrapper[4859]: E0120 09:31:06.272094 4859 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" containerName="git-clone" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.272121 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" containerName="git-clone" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.272330 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" containerName="git-clone" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.273128 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.276295 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.277380 4859 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-7k28k" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.397555 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxxr\" (UniqueName: \"kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr\") pod \"infrawatch-operators-d9qfv\" (UID: \"8b4b14c2-2b30-4a24-8f5b-428fe68e1317\") " pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.498914 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxxr\" (UniqueName: \"kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr\") pod \"infrawatch-operators-d9qfv\" (UID: \"8b4b14c2-2b30-4a24-8f5b-428fe68e1317\") " pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.534725 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bxxr\" 
(UniqueName: \"kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr\") pod \"infrawatch-operators-d9qfv\" (UID: \"8b4b14c2-2b30-4a24-8f5b-428fe68e1317\") " pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.600396 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.844285 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:06 crc kubenswrapper[4859]: I0120 09:31:06.885730 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-d9qfv" event={"ID":"8b4b14c2-2b30-4a24-8f5b-428fe68e1317","Type":"ContainerStarted","Data":"c778bf9a27ebd9634e7f1b93ae92f7a8076a5082f8381c8d7eb1e5fa5fbfae2e"} Jan 20 09:31:06 crc kubenswrapper[4859]: E0120 09:31:06.913281 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:31:06 crc kubenswrapper[4859]: E0120 09:31:06.913501 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bxxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-d9qfv_service-telemetry(8b4b14c2-2b30-4a24-8f5b-428fe68e1317): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:31:06 crc kubenswrapper[4859]: E0120 09:31:06.915455 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-d9qfv" podUID="8b4b14c2-2b30-4a24-8f5b-428fe68e1317" Jan 20 09:31:07 crc kubenswrapper[4859]: I0120 09:31:07.582326 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40bbf81-5ecd-4ae6-bf9e-46196eea56ee" path="/var/lib/kubelet/pods/c40bbf81-5ecd-4ae6-bf9e-46196eea56ee/volumes" Jan 20 09:31:07 crc kubenswrapper[4859]: E0120 09:31:07.897102 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-d9qfv" podUID="8b4b14c2-2b30-4a24-8f5b-428fe68e1317" Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.048912 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.049307 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.049389 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.050350 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.050444 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f" gracePeriod=600 Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.857004 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.918524 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f" exitCode=0 Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.918565 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f"} Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.918591 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432"} Jan 20 09:31:10 crc kubenswrapper[4859]: I0120 09:31:10.918606 4859 scope.go:117] "RemoveContainer" containerID="c4f1b1333bee42b774a64c7b97e32fd3c79bdf7bb73ec034abbbed56ca3dcb79" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.146327 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.266213 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bxxr\" (UniqueName: \"kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr\") pod \"8b4b14c2-2b30-4a24-8f5b-428fe68e1317\" (UID: \"8b4b14c2-2b30-4a24-8f5b-428fe68e1317\") " Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.274847 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr" (OuterVolumeSpecName: "kube-api-access-9bxxr") pod "8b4b14c2-2b30-4a24-8f5b-428fe68e1317" (UID: "8b4b14c2-2b30-4a24-8f5b-428fe68e1317"). InnerVolumeSpecName "kube-api-access-9bxxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.368046 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bxxr\" (UniqueName: \"kubernetes.io/projected/8b4b14c2-2b30-4a24-8f5b-428fe68e1317-kube-api-access-9bxxr\") on node \"crc\" DevicePath \"\"" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.687191 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-pr2xt"] Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.688900 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pr2xt" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.694721 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pr2xt"] Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.875917 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwpzk\" (UniqueName: \"kubernetes.io/projected/7ab9b124-5d3b-4d56-b1c8-ab68152a2e39-kube-api-access-cwpzk\") pod \"infrawatch-operators-pr2xt\" (UID: \"7ab9b124-5d3b-4d56-b1c8-ab68152a2e39\") " pod="service-telemetry/infrawatch-operators-pr2xt" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.925278 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-d9qfv" event={"ID":"8b4b14c2-2b30-4a24-8f5b-428fe68e1317","Type":"ContainerDied","Data":"c778bf9a27ebd9634e7f1b93ae92f7a8076a5082f8381c8d7eb1e5fa5fbfae2e"} Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.925332 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-d9qfv" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.971802 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.977220 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwpzk\" (UniqueName: \"kubernetes.io/projected/7ab9b124-5d3b-4d56-b1c8-ab68152a2e39-kube-api-access-cwpzk\") pod \"infrawatch-operators-pr2xt\" (UID: \"7ab9b124-5d3b-4d56-b1c8-ab68152a2e39\") " pod="service-telemetry/infrawatch-operators-pr2xt" Jan 20 09:31:11 crc kubenswrapper[4859]: I0120 09:31:11.978082 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-d9qfv"] Jan 20 09:31:12 crc kubenswrapper[4859]: I0120 09:31:12.003006 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwpzk\" (UniqueName: \"kubernetes.io/projected/7ab9b124-5d3b-4d56-b1c8-ab68152a2e39-kube-api-access-cwpzk\") pod \"infrawatch-operators-pr2xt\" (UID: \"7ab9b124-5d3b-4d56-b1c8-ab68152a2e39\") " pod="service-telemetry/infrawatch-operators-pr2xt" Jan 20 09:31:12 crc kubenswrapper[4859]: I0120 09:31:12.010648 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pr2xt" Jan 20 09:31:12 crc kubenswrapper[4859]: I0120 09:31:12.227082 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pr2xt"] Jan 20 09:31:12 crc kubenswrapper[4859]: E0120 09:31:12.264808 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:31:12 crc kubenswrapper[4859]: E0120 09:31:12.265392 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:31:12 crc kubenswrapper[4859]: E0120 09:31:12.266753 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:31:12 crc kubenswrapper[4859]: I0120 09:31:12.937242 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pr2xt" event={"ID":"7ab9b124-5d3b-4d56-b1c8-ab68152a2e39","Type":"ContainerStarted","Data":"27e5892a5ad9bad5409889baf31df8cb7580e9c51e39a947eb8bb6fac5e2aebd"} Jan 20 09:31:12 crc kubenswrapper[4859]: E0120 09:31:12.938912 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:31:13 crc kubenswrapper[4859]: I0120 09:31:13.585054 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4b14c2-2b30-4a24-8f5b-428fe68e1317" path="/var/lib/kubelet/pods/8b4b14c2-2b30-4a24-8f5b-428fe68e1317/volumes" Jan 20 09:31:13 crc kubenswrapper[4859]: E0120 09:31:13.946522 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:31:17 crc kubenswrapper[4859]: I0120 09:31:17.535271 4859 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:31:28 crc kubenswrapper[4859]: E0120 09:31:28.628325 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:31:28 crc kubenswrapper[4859]: E0120 09:31:28.629347 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:31:28 crc kubenswrapper[4859]: E0120 09:31:28.631253 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:31:43 crc kubenswrapper[4859]: E0120 09:31:43.576832 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:31:57 crc kubenswrapper[4859]: E0120 09:31:57.626705 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:31:57 crc kubenswrapper[4859]: E0120 09:31:57.627373 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:31:57 crc kubenswrapper[4859]: E0120 09:31:57.628652 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:32:09 crc kubenswrapper[4859]: E0120 09:32:09.575770 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:32:20 crc kubenswrapper[4859]: E0120 09:32:20.577347 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:32:32 crc kubenswrapper[4859]: E0120 09:32:32.576749 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:32:43 crc kubenswrapper[4859]: E0120 09:32:43.623954 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest 
latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:32:43 crc kubenswrapper[4859]: E0120 09:32:43.624722 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:32:43 crc kubenswrapper[4859]: E0120 09:32:43.626077 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:32:56 crc kubenswrapper[4859]: E0120 09:32:56.575886 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:33:09 crc kubenswrapper[4859]: E0120 09:33:09.579085 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:33:10 crc kubenswrapper[4859]: I0120 09:33:10.048290 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:33:10 crc kubenswrapper[4859]: I0120 09:33:10.048721 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.529400 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.532710 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.549183 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.634030 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.634323 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbbz\" (UniqueName: \"kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.634495 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.735239 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.735337 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.735391 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbbz\" (UniqueName: \"kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.736008 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.736197 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.799678 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbbz\" (UniqueName: \"kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz\") pod \"community-operators-zwr8t\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:16 crc kubenswrapper[4859]: I0120 09:33:16.879178 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:17 crc kubenswrapper[4859]: I0120 09:33:17.093278 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:17 crc kubenswrapper[4859]: I0120 09:33:17.938664 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerID="82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8" exitCode=0 Jan 20 09:33:17 crc kubenswrapper[4859]: I0120 09:33:17.938745 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerDied","Data":"82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8"} Jan 20 09:33:17 crc kubenswrapper[4859]: I0120 09:33:17.939038 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerStarted","Data":"b168d327718f94e386af5330792a611e65e2cd40f5221d7a757331e73947183c"} Jan 20 09:33:19 crc kubenswrapper[4859]: I0120 09:33:19.973819 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerID="8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707" exitCode=0 Jan 20 09:33:19 crc kubenswrapper[4859]: I0120 09:33:19.973928 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerDied","Data":"8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707"} Jan 20 09:33:20 crc kubenswrapper[4859]: I0120 09:33:20.984543 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" 
event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerStarted","Data":"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3"} Jan 20 09:33:21 crc kubenswrapper[4859]: I0120 09:33:21.003060 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zwr8t" podStartSLOduration=2.45464913 podStartE2EDuration="5.003040168s" podCreationTimestamp="2026-01-20 09:33:16 +0000 UTC" firstStartedPulling="2026-01-20 09:33:17.940658382 +0000 UTC m=+872.696674558" lastFinishedPulling="2026-01-20 09:33:20.48904942 +0000 UTC m=+875.245065596" observedRunningTime="2026-01-20 09:33:20.999917833 +0000 UTC m=+875.755934019" watchObservedRunningTime="2026-01-20 09:33:21.003040168 +0000 UTC m=+875.759056344" Jan 20 09:33:23 crc kubenswrapper[4859]: E0120 09:33:23.575815 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.314757 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"] Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.317714 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.341072 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"] Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.345128 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcslz\" (UniqueName: \"kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.345272 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.345325 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.445971 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcslz\" (UniqueName: \"kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.446048 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.446086 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.446550 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.446880 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.468686 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcslz\" (UniqueName: \"kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz\") pod \"redhat-operators-z8wxx\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.660204 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:24 crc kubenswrapper[4859]: I0120 09:33:24.907884 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"] Jan 20 09:33:25 crc kubenswrapper[4859]: I0120 09:33:25.017246 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerStarted","Data":"d08cb4661f925b63e2ab9cf17c688874a94a31554e102f6d547eb7c5642bc7cb"} Jan 20 09:33:26 crc kubenswrapper[4859]: I0120 09:33:26.026939 4859 generic.go:334] "Generic (PLEG): container finished" podID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerID="8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9" exitCode=0 Jan 20 09:33:26 crc kubenswrapper[4859]: I0120 09:33:26.026981 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerDied","Data":"8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9"} Jan 20 09:33:26 crc kubenswrapper[4859]: I0120 09:33:26.880319 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:26 crc kubenswrapper[4859]: I0120 09:33:26.880657 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:26 crc kubenswrapper[4859]: I0120 09:33:26.932234 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:27 crc kubenswrapper[4859]: I0120 09:33:27.036479 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" 
event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerStarted","Data":"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"} Jan 20 09:33:27 crc kubenswrapper[4859]: I0120 09:33:27.132256 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:28 crc kubenswrapper[4859]: I0120 09:33:28.046947 4859 generic.go:334] "Generic (PLEG): container finished" podID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerID="218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6" exitCode=0 Jan 20 09:33:28 crc kubenswrapper[4859]: I0120 09:33:28.046984 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerDied","Data":"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"} Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.078926 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.079388 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zwr8t" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="registry-server" containerID="cri-o://8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3" gracePeriod=2 Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.422381 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.511826 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities\") pod \"ab1b494b-c290-4e91-80f0-06ab26e0236e\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.511931 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content\") pod \"ab1b494b-c290-4e91-80f0-06ab26e0236e\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.511963 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlbbz\" (UniqueName: \"kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz\") pod \"ab1b494b-c290-4e91-80f0-06ab26e0236e\" (UID: \"ab1b494b-c290-4e91-80f0-06ab26e0236e\") " Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.512709 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities" (OuterVolumeSpecName: "utilities") pod "ab1b494b-c290-4e91-80f0-06ab26e0236e" (UID: "ab1b494b-c290-4e91-80f0-06ab26e0236e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.521190 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz" (OuterVolumeSpecName: "kube-api-access-mlbbz") pod "ab1b494b-c290-4e91-80f0-06ab26e0236e" (UID: "ab1b494b-c290-4e91-80f0-06ab26e0236e"). InnerVolumeSpecName "kube-api-access-mlbbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.618173 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlbbz\" (UniqueName: \"kubernetes.io/projected/ab1b494b-c290-4e91-80f0-06ab26e0236e-kube-api-access-mlbbz\") on node \"crc\" DevicePath \"\"" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.618239 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.737573 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab1b494b-c290-4e91-80f0-06ab26e0236e" (UID: "ab1b494b-c290-4e91-80f0-06ab26e0236e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:33:29 crc kubenswrapper[4859]: I0120 09:33:29.821274 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1b494b-c290-4e91-80f0-06ab26e0236e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.066734 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerStarted","Data":"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"} Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.069219 4859 generic.go:334] "Generic (PLEG): container finished" podID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerID="8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3" exitCode=0 Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.069291 4859 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwr8t" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.069282 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerDied","Data":"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3"} Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.069481 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwr8t" event={"ID":"ab1b494b-c290-4e91-80f0-06ab26e0236e","Type":"ContainerDied","Data":"b168d327718f94e386af5330792a611e65e2cd40f5221d7a757331e73947183c"} Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.069529 4859 scope.go:117] "RemoveContainer" containerID="8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.089433 4859 scope.go:117] "RemoveContainer" containerID="8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.110377 4859 scope.go:117] "RemoveContainer" containerID="82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.123055 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8wxx" podStartSLOduration=3.186826537 podStartE2EDuration="6.123038664s" podCreationTimestamp="2026-01-20 09:33:24 +0000 UTC" firstStartedPulling="2026-01-20 09:33:26.028311077 +0000 UTC m=+880.784327253" lastFinishedPulling="2026-01-20 09:33:28.964523204 +0000 UTC m=+883.720539380" observedRunningTime="2026-01-20 09:33:30.088274274 +0000 UTC m=+884.844290450" watchObservedRunningTime="2026-01-20 09:33:30.123038664 +0000 UTC m=+884.879054840" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.123424 4859 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.133742 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zwr8t"] Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.136711 4859 scope.go:117] "RemoveContainer" containerID="8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3" Jan 20 09:33:30 crc kubenswrapper[4859]: E0120 09:33:30.137115 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3\": container with ID starting with 8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3 not found: ID does not exist" containerID="8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.137167 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3"} err="failed to get container status \"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3\": rpc error: code = NotFound desc = could not find container \"8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3\": container with ID starting with 8e9c6b0454e3caff6320345f7b1b76b8dd27e28c3170e5e5677373086d330eb3 not found: ID does not exist" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.137187 4859 scope.go:117] "RemoveContainer" containerID="8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707" Jan 20 09:33:30 crc kubenswrapper[4859]: E0120 09:33:30.137416 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707\": container with ID starting with 
8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707 not found: ID does not exist" containerID="8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.137442 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707"} err="failed to get container status \"8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707\": rpc error: code = NotFound desc = could not find container \"8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707\": container with ID starting with 8075ce616976d445ddec7b0b349649180564a435a5ec282c7b5e3f76fa971707 not found: ID does not exist" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.137726 4859 scope.go:117] "RemoveContainer" containerID="82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8" Jan 20 09:33:30 crc kubenswrapper[4859]: E0120 09:33:30.138275 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8\": container with ID starting with 82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8 not found: ID does not exist" containerID="82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8" Jan 20 09:33:30 crc kubenswrapper[4859]: I0120 09:33:30.138317 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8"} err="failed to get container status \"82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8\": rpc error: code = NotFound desc = could not find container \"82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8\": container with ID starting with 82b7f08112d43a63b2b1faa41e4762b227aadfc1e3f55829310b3c36568514f8 not found: ID does not 
exist" Jan 20 09:33:31 crc kubenswrapper[4859]: I0120 09:33:31.583774 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" path="/var/lib/kubelet/pods/ab1b494b-c290-4e91-80f0-06ab26e0236e/volumes" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.618149 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"] Jan 20 09:33:34 crc kubenswrapper[4859]: E0120 09:33:34.618863 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="extract-content" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.618884 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="extract-content" Jan 20 09:33:34 crc kubenswrapper[4859]: E0120 09:33:34.618910 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="extract-utilities" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.618921 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="extract-utilities" Jan 20 09:33:34 crc kubenswrapper[4859]: E0120 09:33:34.618938 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="registry-server" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.618947 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="registry-server" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.619125 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1b494b-c290-4e91-80f0-06ab26e0236e" containerName="registry-server" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.620410 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.640214 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"] Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.661416 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.661477 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.693506 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.693694 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbsv\" (UniqueName: \"kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.693741 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.795356 4859 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9bbsv\" (UniqueName: \"kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.795410 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.795470 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.796150 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.796512 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.820655 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbsv\" 
(UniqueName: \"kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv\") pod \"certified-operators-m7nl5\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") " pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:34 crc kubenswrapper[4859]: I0120 09:33:34.952611 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:35 crc kubenswrapper[4859]: I0120 09:33:35.410075 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"] Jan 20 09:33:35 crc kubenswrapper[4859]: E0120 09:33:35.580144 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:33:35 crc kubenswrapper[4859]: I0120 09:33:35.707045 4859 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8wxx" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="registry-server" probeResult="failure" output=< Jan 20 09:33:35 crc kubenswrapper[4859]: timeout: failed to connect service ":50051" within 1s Jan 20 09:33:35 crc kubenswrapper[4859]: > Jan 20 09:33:36 crc kubenswrapper[4859]: I0120 09:33:36.122358 4859 generic.go:334] "Generic (PLEG): container finished" podID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerID="e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97" exitCode=0 Jan 20 09:33:36 crc kubenswrapper[4859]: I0120 09:33:36.122409 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" 
event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerDied","Data":"e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97"} Jan 20 09:33:36 crc kubenswrapper[4859]: I0120 09:33:36.122433 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerStarted","Data":"8ca72eadd79a53057c05b2371840d8d9702b52443fb911204db2731b52fbb096"} Jan 20 09:33:38 crc kubenswrapper[4859]: I0120 09:33:38.142720 4859 generic.go:334] "Generic (PLEG): container finished" podID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerID="d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3" exitCode=0 Jan 20 09:33:38 crc kubenswrapper[4859]: I0120 09:33:38.142823 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerDied","Data":"d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3"} Jan 20 09:33:40 crc kubenswrapper[4859]: I0120 09:33:40.048570 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:33:40 crc kubenswrapper[4859]: I0120 09:33:40.049351 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:33:40 crc kubenswrapper[4859]: I0120 09:33:40.161723 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" 
event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerStarted","Data":"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"} Jan 20 09:33:40 crc kubenswrapper[4859]: I0120 09:33:40.188778 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m7nl5" podStartSLOduration=3.220044624 podStartE2EDuration="6.188758918s" podCreationTimestamp="2026-01-20 09:33:34 +0000 UTC" firstStartedPulling="2026-01-20 09:33:36.124973958 +0000 UTC m=+890.880990134" lastFinishedPulling="2026-01-20 09:33:39.093688222 +0000 UTC m=+893.849704428" observedRunningTime="2026-01-20 09:33:40.186044143 +0000 UTC m=+894.942060409" watchObservedRunningTime="2026-01-20 09:33:40.188758918 +0000 UTC m=+894.944775104" Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.732900 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.799233 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.953160 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.953472 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.961723 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"] Jan 20 09:33:44 crc kubenswrapper[4859]: I0120 09:33:44.995116 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:45 crc kubenswrapper[4859]: I0120 09:33:45.264927 4859 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m7nl5" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.207905 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8wxx" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="registry-server" containerID="cri-o://a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e" gracePeriod=2 Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.731824 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8wxx" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.781491 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content\") pod \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.781689 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcslz\" (UniqueName: \"kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz\") pod \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.781754 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities\") pod \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\" (UID: \"4d267e81-9340-4a5e-ae38-735fcc5fafaa\") " Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.783495 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities" (OuterVolumeSpecName: "utilities") pod 
"4d267e81-9340-4a5e-ae38-735fcc5fafaa" (UID: "4d267e81-9340-4a5e-ae38-735fcc5fafaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.796510 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz" (OuterVolumeSpecName: "kube-api-access-hcslz") pod "4d267e81-9340-4a5e-ae38-735fcc5fafaa" (UID: "4d267e81-9340-4a5e-ae38-735fcc5fafaa"). InnerVolumeSpecName "kube-api-access-hcslz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.883837 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcslz\" (UniqueName: \"kubernetes.io/projected/4d267e81-9340-4a5e-ae38-735fcc5fafaa-kube-api-access-hcslz\") on node \"crc\" DevicePath \"\"" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.883880 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.912586 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d267e81-9340-4a5e-ae38-735fcc5fafaa" (UID: "4d267e81-9340-4a5e-ae38-735fcc5fafaa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:33:46 crc kubenswrapper[4859]: I0120 09:33:46.985392 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d267e81-9340-4a5e-ae38-735fcc5fafaa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.215437 4859 generic.go:334] "Generic (PLEG): container finished" podID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerID="a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e" exitCode=0
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.215535 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8wxx"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.215553 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerDied","Data":"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"}
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.215613 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8wxx" event={"ID":"4d267e81-9340-4a5e-ae38-735fcc5fafaa","Type":"ContainerDied","Data":"d08cb4661f925b63e2ab9cf17c688874a94a31554e102f6d547eb7c5642bc7cb"}
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.215646 4859 scope.go:117] "RemoveContainer" containerID="a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.245311 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"]
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.245446 4859 scope.go:117] "RemoveContainer" containerID="218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.250767 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8wxx"]
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.273544 4859 scope.go:117] "RemoveContainer" containerID="8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.291489 4859 scope.go:117] "RemoveContainer" containerID="a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"
Jan 20 09:33:47 crc kubenswrapper[4859]: E0120 09:33:47.292114 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e\": container with ID starting with a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e not found: ID does not exist" containerID="a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.292156 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e"} err="failed to get container status \"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e\": rpc error: code = NotFound desc = could not find container \"a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e\": container with ID starting with a9935fe127d806351350d36af4f5ec54cc6ebc49f16a486c703055b54daac91e not found: ID does not exist"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.292182 4859 scope.go:117] "RemoveContainer" containerID="218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"
Jan 20 09:33:47 crc kubenswrapper[4859]: E0120 09:33:47.292573 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6\": container with ID starting with 218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6 not found: ID does not exist" containerID="218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.292596 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6"} err="failed to get container status \"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6\": rpc error: code = NotFound desc = could not find container \"218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6\": container with ID starting with 218b127dcae34f2ee54ed6c85c6773773057d0244c0e78ef0426bca2a6502ec6 not found: ID does not exist"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.292610 4859 scope.go:117] "RemoveContainer" containerID="8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9"
Jan 20 09:33:47 crc kubenswrapper[4859]: E0120 09:33:47.293141 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9\": container with ID starting with 8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9 not found: ID does not exist" containerID="8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.293166 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9"} err="failed to get container status \"8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9\": rpc error: code = NotFound desc = could not find container \"8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9\": container with ID starting with 8d17a3d48989fb2b5c26fcc1e8f1ce74f237c6079a4e73511b56cbf8ccf5d3e9 not found: ID does not exist"
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.361768 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"]
Jan 20 09:33:47 crc kubenswrapper[4859]: I0120 09:33:47.584162 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" path="/var/lib/kubelet/pods/4d267e81-9340-4a5e-ae38-735fcc5fafaa/volumes"
Jan 20 09:33:48 crc kubenswrapper[4859]: I0120 09:33:48.224466 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m7nl5" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="registry-server" containerID="cri-o://ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be" gracePeriod=2
Jan 20 09:33:48 crc kubenswrapper[4859]: E0120 09:33:48.575042 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.122055 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7nl5"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.214489 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content\") pod \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") "
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.214594 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities\") pod \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") "
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.214702 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbsv\" (UniqueName: \"kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv\") pod \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\" (UID: \"228e9e0b-22af-4e94-87d4-e1a31b78bc6d\") "
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.216457 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities" (OuterVolumeSpecName: "utilities") pod "228e9e0b-22af-4e94-87d4-e1a31b78bc6d" (UID: "228e9e0b-22af-4e94-87d4-e1a31b78bc6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.226975 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv" (OuterVolumeSpecName: "kube-api-access-9bbsv") pod "228e9e0b-22af-4e94-87d4-e1a31b78bc6d" (UID: "228e9e0b-22af-4e94-87d4-e1a31b78bc6d"). InnerVolumeSpecName "kube-api-access-9bbsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.232291 4859 generic.go:334] "Generic (PLEG): container finished" podID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerID="ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be" exitCode=0
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.232336 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerDied","Data":"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"}
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.232362 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7nl5"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.232382 4859 scope.go:117] "RemoveContainer" containerID="ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.232368 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nl5" event={"ID":"228e9e0b-22af-4e94-87d4-e1a31b78bc6d","Type":"ContainerDied","Data":"8ca72eadd79a53057c05b2371840d8d9702b52443fb911204db2731b52fbb096"}
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.264710 4859 scope.go:117] "RemoveContainer" containerID="d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.265193 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "228e9e0b-22af-4e94-87d4-e1a31b78bc6d" (UID: "228e9e0b-22af-4e94-87d4-e1a31b78bc6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.290935 4859 scope.go:117] "RemoveContainer" containerID="e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.307876 4859 scope.go:117] "RemoveContainer" containerID="ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"
Jan 20 09:33:49 crc kubenswrapper[4859]: E0120 09:33:49.308424 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be\": container with ID starting with ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be not found: ID does not exist" containerID="ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.308457 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be"} err="failed to get container status \"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be\": rpc error: code = NotFound desc = could not find container \"ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be\": container with ID starting with ad04294912cc0275e1dec234883f0d31d57584c839e75a4bec02f714a35b84be not found: ID does not exist"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.308483 4859 scope.go:117] "RemoveContainer" containerID="d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3"
Jan 20 09:33:49 crc kubenswrapper[4859]: E0120 09:33:49.308831 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3\": container with ID starting with d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3 not found: ID does not exist" containerID="d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.308854 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3"} err="failed to get container status \"d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3\": rpc error: code = NotFound desc = could not find container \"d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3\": container with ID starting with d2905e7a459beac7b03eca43bf99fbd5dc2027e636d1b550f18d6c3e6c0f47f3 not found: ID does not exist"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.308869 4859 scope.go:117] "RemoveContainer" containerID="e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97"
Jan 20 09:33:49 crc kubenswrapper[4859]: E0120 09:33:49.309252 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97\": container with ID starting with e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97 not found: ID does not exist" containerID="e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.309309 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97"} err="failed to get container status \"e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97\": rpc error: code = NotFound desc = could not find container \"e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97\": container with ID starting with e87c932e2ad3c70dcd3499214fae0f2a08ed0854bd35e2b2857d4ee331033a97 not found: ID does not exist"
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.317014 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.317043 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbsv\" (UniqueName: \"kubernetes.io/projected/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-kube-api-access-9bbsv\") on node \"crc\" DevicePath \"\""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.317054 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228e9e0b-22af-4e94-87d4-e1a31b78bc6d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.570816 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"]
Jan 20 09:33:49 crc kubenswrapper[4859]: I0120 09:33:49.585043 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m7nl5"]
Jan 20 09:33:51 crc kubenswrapper[4859]: I0120 09:33:51.584014 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" path="/var/lib/kubelet/pods/228e9e0b-22af-4e94-87d4-e1a31b78bc6d/volumes"
Jan 20 09:34:02 crc kubenswrapper[4859]: E0120 09:34:02.579805 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.047886 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.048528 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.048618 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.049950 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.050172 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432" gracePeriod=600
Jan 20 09:34:10 crc kubenswrapper[4859]: E0120 09:34:10.154993 4859 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddab032ef_85ae_456c_b5ea_750bc1c32483.slice/crio-696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432.scope\": RecentStats: unable to find data in memory cache]"
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.390874 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432" exitCode=0
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.391073 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432"}
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.391221 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af"}
Jan 20 09:34:10 crc kubenswrapper[4859]: I0120 09:34:10.391240 4859 scope.go:117] "RemoveContainer" containerID="2d0d88de4d76d095e717392508318300a2f04b7c83aa49cc8a2f84ff71267e9f"
Jan 20 09:34:16 crc kubenswrapper[4859]: I0120 09:34:16.577709 4859 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 09:34:16 crc kubenswrapper[4859]: E0120 09:34:16.620164 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Jan 20 09:34:16 crc kubenswrapper[4859]: E0120 09:34:16.620495 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Jan 20 09:34:16 crc kubenswrapper[4859]: E0120 09:34:16.621804 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:34:31 crc kubenswrapper[4859]: E0120 09:34:31.575954 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:34:43 crc kubenswrapper[4859]: E0120 09:34:43.576180 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:34:55 crc kubenswrapper[4859]: E0120 09:34:55.579743 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:35:07 crc kubenswrapper[4859]: E0120 09:35:07.577463 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:35:18 crc kubenswrapper[4859]: E0120 09:35:18.575500 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:35:30 crc kubenswrapper[4859]: E0120 09:35:30.577584 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:35:42 crc kubenswrapper[4859]: E0120 09:35:42.577893 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:35:53 crc kubenswrapper[4859]: E0120 09:35:53.614027 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:36:04 crc kubenswrapper[4859]: E0120 09:36:04.575909 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:36:10 crc kubenswrapper[4859]: I0120 09:36:10.049214 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:36:10 crc kubenswrapper[4859]: I0120 09:36:10.049948 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:36:18 crc kubenswrapper[4859]: E0120 09:36:18.576990 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:36:31 crc kubenswrapper[4859]: E0120 09:36:31.575843 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:36:40 crc kubenswrapper[4859]: I0120 09:36:40.048999 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:36:40 crc kubenswrapper[4859]: I0120 09:36:40.049473 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:36:45 crc kubenswrapper[4859]: E0120 09:36:45.579355 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:37:00 crc kubenswrapper[4859]: E0120 09:37:00.624651 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Jan 20 09:37:00 crc kubenswrapper[4859]: E0120 09:37:00.625658 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Jan 20 09:37:00 crc kubenswrapper[4859]: E0120 09:37:00.626931 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.048770 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.049505 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.049575 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.050121 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.050172 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af" gracePeriod=600
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.709557 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af" exitCode=0
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.709667 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af"}
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.710305 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1"}
Jan 20 09:37:10 crc kubenswrapper[4859]: I0120 09:37:10.710333 4859 scope.go:117] "RemoveContainer" containerID="696c0fe563bd554bf48f3eac26708b2138633993fdad488085e64f3c0dee1432"
Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.575438 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625449 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-fzgkn"]
Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.625705 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="extract-utilities"
Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625724 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="extract-utilities"
Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.625732 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="extract-content"
Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625739 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="extract-content"
Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.625751 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="extract-content"
Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625757 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="extract-content"
Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.625770 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="registry-server"
Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625776 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="registry-server"
Jan 20 09:37:13 crc 
kubenswrapper[4859]: E0120 09:37:13.625811 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="registry-server" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625820 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="registry-server" Jan 20 09:37:13 crc kubenswrapper[4859]: E0120 09:37:13.625839 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="extract-utilities" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625847 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="extract-utilities" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625949 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="228e9e0b-22af-4e94-87d4-e1a31b78bc6d" containerName="registry-server" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.625969 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d267e81-9340-4a5e-ae38-735fcc5fafaa" containerName="registry-server" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.626445 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fzgkn" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.632099 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fzgkn"] Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.807519 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv29s\" (UniqueName: \"kubernetes.io/projected/778d674c-d99e-4c83-9781-9a772e7a7c2a-kube-api-access-gv29s\") pod \"infrawatch-operators-fzgkn\" (UID: \"778d674c-d99e-4c83-9781-9a772e7a7c2a\") " pod="service-telemetry/infrawatch-operators-fzgkn" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.908644 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv29s\" (UniqueName: \"kubernetes.io/projected/778d674c-d99e-4c83-9781-9a772e7a7c2a-kube-api-access-gv29s\") pod \"infrawatch-operators-fzgkn\" (UID: \"778d674c-d99e-4c83-9781-9a772e7a7c2a\") " pod="service-telemetry/infrawatch-operators-fzgkn" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.938602 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv29s\" (UniqueName: \"kubernetes.io/projected/778d674c-d99e-4c83-9781-9a772e7a7c2a-kube-api-access-gv29s\") pod \"infrawatch-operators-fzgkn\" (UID: \"778d674c-d99e-4c83-9781-9a772e7a7c2a\") " pod="service-telemetry/infrawatch-operators-fzgkn" Jan 20 09:37:13 crc kubenswrapper[4859]: I0120 09:37:13.946282 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fzgkn" Jan 20 09:37:14 crc kubenswrapper[4859]: I0120 09:37:14.159118 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fzgkn"] Jan 20 09:37:14 crc kubenswrapper[4859]: E0120 09:37:14.188897 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:37:14 crc kubenswrapper[4859]: E0120 09:37:14.189146 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:37:14 crc kubenswrapper[4859]: E0120 09:37:14.191324 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" 
podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:14 crc kubenswrapper[4859]: I0120 09:37:14.746390 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fzgkn" event={"ID":"778d674c-d99e-4c83-9781-9a772e7a7c2a","Type":"ContainerStarted","Data":"c9e2195d9baaaff40cf2cbec19a2079afb6175e7c8ddf71bb41d5e1362967af6"} Jan 20 09:37:14 crc kubenswrapper[4859]: E0120 09:37:14.748338 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:15 crc kubenswrapper[4859]: E0120 09:37:15.756005 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:28 crc kubenswrapper[4859]: E0120 09:37:28.576456 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:37:29 crc kubenswrapper[4859]: E0120 09:37:29.627762 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest 
latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:37:29 crc kubenswrapper[4859]: E0120 09:37:29.628395 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:37:29 crc kubenswrapper[4859]: E0120 09:37:29.630099 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" 
podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:40 crc kubenswrapper[4859]: E0120 09:37:40.578049 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:43 crc kubenswrapper[4859]: E0120 09:37:43.576525 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:37:52 crc kubenswrapper[4859]: E0120 09:37:52.650034 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:37:52 crc kubenswrapper[4859]: E0120 09:37:52.650696 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:37:52 crc kubenswrapper[4859]: E0120 09:37:52.653124 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:37:57 crc kubenswrapper[4859]: E0120 09:37:57.576148 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:38:03 crc kubenswrapper[4859]: E0120 09:38:03.579863 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:38:09 crc kubenswrapper[4859]: E0120 09:38:09.576961 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" 
pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:38:17 crc kubenswrapper[4859]: E0120 09:38:17.576706 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:38:24 crc kubenswrapper[4859]: E0120 09:38:24.577071 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:38:30 crc kubenswrapper[4859]: E0120 09:38:30.576264 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:38:37 crc kubenswrapper[4859]: E0120 09:38:37.575285 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:38:45 crc kubenswrapper[4859]: E0120 09:38:45.619176 4859 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:38:45 crc kubenswrapper[4859]: E0120 09:38:45.620445 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:38:45 crc kubenswrapper[4859]: E0120 09:38:45.621675 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" 
podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:38:51 crc kubenswrapper[4859]: E0120 09:38:51.575732 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:39:00 crc kubenswrapper[4859]: E0120 09:39:00.576607 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:39:02 crc kubenswrapper[4859]: E0120 09:39:02.574941 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:39:10 crc kubenswrapper[4859]: I0120 09:39:10.048382 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:39:10 crc kubenswrapper[4859]: I0120 09:39:10.049027 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:39:14 crc kubenswrapper[4859]: E0120 09:39:14.575232 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:39:17 crc kubenswrapper[4859]: E0120 09:39:17.577722 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:39:28 crc kubenswrapper[4859]: E0120 09:39:28.576667 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:39:28 crc kubenswrapper[4859]: E0120 09:39:28.576800 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:39:40 crc kubenswrapper[4859]: I0120 09:39:40.048466 4859 patch_prober.go:28] interesting 
pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:39:40 crc kubenswrapper[4859]: I0120 09:39:40.049092 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:39:42 crc kubenswrapper[4859]: E0120 09:39:42.576560 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:39:43 crc kubenswrapper[4859]: E0120 09:39:43.575473 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:39:53 crc kubenswrapper[4859]: E0120 09:39:53.577759 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:39:57 crc 
kubenswrapper[4859]: E0120 09:39:57.576570 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:40:04 crc kubenswrapper[4859]: E0120 09:40:04.575183 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:40:09 crc kubenswrapper[4859]: E0120 09:40:09.577219 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:40:10 crc kubenswrapper[4859]: I0120 09:40:10.048373 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:40:10 crc kubenswrapper[4859]: I0120 09:40:10.048462 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:40:10 crc kubenswrapper[4859]: I0120 09:40:10.048528 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" Jan 20 09:40:10 crc kubenswrapper[4859]: I0120 09:40:10.049418 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:40:10 crc kubenswrapper[4859]: I0120 09:40:10.049520 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1" gracePeriod=600 Jan 20 09:40:11 crc kubenswrapper[4859]: I0120 09:40:11.077646 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1" exitCode=0 Jan 20 09:40:11 crc kubenswrapper[4859]: I0120 09:40:11.077755 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1"} Jan 20 09:40:11 crc kubenswrapper[4859]: I0120 09:40:11.078324 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerStarted","Data":"8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"} Jan 20 09:40:11 crc kubenswrapper[4859]: I0120 09:40:11.078359 4859 scope.go:117] "RemoveContainer" containerID="4e8f8c89419aae5768017e2c91db37978f05703950a907b8cd393f4c5926b1af" Jan 20 09:40:15 crc kubenswrapper[4859]: I0120 09:40:15.578957 4859 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 09:40:15 crc kubenswrapper[4859]: E0120 09:40:15.620876 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:40:15 crc kubenswrapper[4859]: E0120 09:40:15.621071 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:40:15 crc kubenswrapper[4859]: E0120 09:40:15.622451 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:40:20 crc kubenswrapper[4859]: E0120 09:40:20.577407 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:40:29 crc kubenswrapper[4859]: E0120 09:40:29.578192 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:40:33 crc kubenswrapper[4859]: E0120 09:40:33.576524 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:40:43 crc kubenswrapper[4859]: E0120 09:40:43.576927 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:40:45 crc kubenswrapper[4859]: E0120 09:40:45.578827 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:40:54 crc kubenswrapper[4859]: E0120 09:40:54.576159 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:40:58 crc kubenswrapper[4859]: E0120 09:40:58.578169 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:41:06 crc kubenswrapper[4859]: E0120 09:41:06.575263 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:41:13 crc 
kubenswrapper[4859]: E0120 09:41:13.579515 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:41:21 crc kubenswrapper[4859]: E0120 09:41:21.577293 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:41:28 crc kubenswrapper[4859]: E0120 09:41:28.576809 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:41:35 crc kubenswrapper[4859]: E0120 09:41:35.594387 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:41:41 crc kubenswrapper[4859]: E0120 09:41:41.575693 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:41:49 crc kubenswrapper[4859]: E0120 09:41:49.577700 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:41:56 crc kubenswrapper[4859]: E0120 09:41:56.575735 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:42:01 crc kubenswrapper[4859]: E0120 09:42:01.576769 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:42:08 crc kubenswrapper[4859]: E0120 09:42:08.632383 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" 
image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:42:08 crc kubenswrapper[4859]: E0120 09:42:08.633695 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:42:08 crc kubenswrapper[4859]: E0120 09:42:08.635518 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:42:10 crc kubenswrapper[4859]: I0120 09:42:10.048528 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:42:10 crc kubenswrapper[4859]: I0120 09:42:10.048923 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:42:14 crc kubenswrapper[4859]: E0120 09:42:14.575694 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:42:23 crc kubenswrapper[4859]: E0120 09:42:23.575930 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.384909 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jjhxj/must-gather-9nq8b"] Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.386455 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.388622 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jjhxj"/"kube-root-ca.crt" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.388915 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jjhxj"/"openshift-service-ca.crt" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.400472 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.400556 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfg5\" (UniqueName: \"kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.403371 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjhxj/must-gather-9nq8b"] Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.501993 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmfg5\" (UniqueName: \"kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.502066 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.502407 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.524127 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmfg5\" (UniqueName: \"kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5\") pod \"must-gather-9nq8b\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") " pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.718039 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" Jan 20 09:42:25 crc kubenswrapper[4859]: I0120 09:42:25.929002 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjhxj/must-gather-9nq8b"] Jan 20 09:42:26 crc kubenswrapper[4859]: I0120 09:42:26.082969 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" event={"ID":"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96","Type":"ContainerStarted","Data":"c738fbf31da8e69765733214e3c72a930f262deafa8373bb1551ff8d6737a075"} Jan 20 09:42:27 crc kubenswrapper[4859]: E0120 09:42:27.576627 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:42:33 crc kubenswrapper[4859]: I0120 09:42:33.137738 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" event={"ID":"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96","Type":"ContainerStarted","Data":"78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e"} Jan 20 09:42:33 crc kubenswrapper[4859]: I0120 09:42:33.138136 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" event={"ID":"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96","Type":"ContainerStarted","Data":"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"} Jan 20 09:42:33 crc kubenswrapper[4859]: I0120 09:42:33.164712 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" podStartSLOduration=1.492758184 podStartE2EDuration="8.164688232s" podCreationTimestamp="2026-01-20 09:42:25 +0000 UTC" firstStartedPulling="2026-01-20 
09:42:25.943903274 +0000 UTC m=+1420.699919440" lastFinishedPulling="2026-01-20 09:42:32.615833312 +0000 UTC m=+1427.371849488" observedRunningTime="2026-01-20 09:42:33.155290302 +0000 UTC m=+1427.911306518" watchObservedRunningTime="2026-01-20 09:42:33.164688232 +0000 UTC m=+1427.920704418" Jan 20 09:42:38 crc kubenswrapper[4859]: E0120 09:42:38.576048 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:42:39 crc kubenswrapper[4859]: E0120 09:42:39.575579 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:42:40 crc kubenswrapper[4859]: I0120 09:42:40.048822 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:42:40 crc kubenswrapper[4859]: I0120 09:42:40.048914 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:42:51 crc kubenswrapper[4859]: E0120 09:42:51.576569 4859 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:42:54 crc kubenswrapper[4859]: E0120 09:42:54.576659 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:43:06 crc kubenswrapper[4859]: E0120 09:43:06.575891 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:43:08 crc kubenswrapper[4859]: E0120 09:43:08.627174 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:43:08 crc kubenswrapper[4859]: E0120 09:43:08.627714 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv29s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-fzgkn_service-telemetry(778d674c-d99e-4c83-9781-9a772e7a7c2a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Jan 20 09:43:08 crc kubenswrapper[4859]: E0120 09:43:08.629401 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.048252 4859 patch_prober.go:28] interesting pod/machine-config-daemon-knvgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.048322 4859 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.048381 4859 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knvgk"
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.049052 4859 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"} pod="openshift-machine-config-operator/machine-config-daemon-knvgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.049121 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerName="machine-config-daemon" containerID="cri-o://8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" gracePeriod=600
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.390419 4859 generic.go:334] "Generic (PLEG): container finished" podID="dab032ef-85ae-456c-b5ea-750bc1c32483" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" exitCode=0
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.390452 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" event={"ID":"dab032ef-85ae-456c-b5ea-750bc1c32483","Type":"ContainerDied","Data":"8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"}
Jan 20 09:43:10 crc kubenswrapper[4859]: I0120 09:43:10.391058 4859 scope.go:117] "RemoveContainer" containerID="21dc8d55aa244c237b0454eb37dd46da6d92ffe88ec96bd854334eb300a81fb1"
Jan 20 09:43:10 crc kubenswrapper[4859]: E0120 09:43:10.680647 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:43:11 crc kubenswrapper[4859]: I0120 09:43:11.404383 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:43:11 crc kubenswrapper[4859]: E0120 09:43:11.404627 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:43:16 crc kubenswrapper[4859]: I0120 09:43:16.200317 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jn6p8_1c7c602a-e7f3-42de-a0ab-38e317f8b4ed/control-plane-machine-set-operator/0.log"
Jan 20 09:43:16 crc kubenswrapper[4859]: I0120 09:43:16.301410 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45s4x_55e6a858-5ae7-4d3d-a454-227bf8b52195/kube-rbac-proxy/0.log"
Jan 20 09:43:16 crc kubenswrapper[4859]: I0120 09:43:16.389290 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45s4x_55e6a858-5ae7-4d3d-a454-227bf8b52195/machine-api-operator/0.log"
Jan 20 09:43:17 crc kubenswrapper[4859]: E0120 09:43:17.575354 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:43:19 crc kubenswrapper[4859]: E0120 09:43:19.575418 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:43:25 crc kubenswrapper[4859]: I0120 09:43:25.579408 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:43:25 crc kubenswrapper[4859]: E0120 09:43:25.580221 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:43:29 crc kubenswrapper[4859]: I0120 09:43:29.307512 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-fw68s_cf5440c8-2dc3-4a2e-8a26-11592e8e38ed/cert-manager-controller/0.log"
Jan 20 09:43:29 crc kubenswrapper[4859]: I0120 09:43:29.387363 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-n24zn_3e319c6c-4401-4927-be16-26ce497d732a/cert-manager-cainjector/0.log"
Jan 20 09:43:29 crc kubenswrapper[4859]: I0120 09:43:29.450950 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-zzsbb_c8536512-4e62-493c-9447-71eb8841c32f/cert-manager-webhook/0.log"
Jan 20 09:43:30 crc kubenswrapper[4859]: E0120 09:43:30.576533 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:43:32 crc kubenswrapper[4859]: E0120 09:43:32.574878 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:43:36 crc kubenswrapper[4859]: I0120 09:43:36.574030 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:43:36 crc kubenswrapper[4859]: E0120 09:43:36.574693 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:43:44 crc kubenswrapper[4859]: I0120 09:43:44.160991 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-22h5d_5fd94288-36b5-45a6-8b54-f91cf71c4db8/prometheus-operator/0.log"
Jan 20 09:43:44 crc kubenswrapper[4859]: I0120 09:43:44.289357 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8_9cbde904-658d-4fc7-9705-692dc47c50c4/prometheus-operator-admission-webhook/0.log"
Jan 20 09:43:44 crc kubenswrapper[4859]: I0120 09:43:44.361872 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf_8b98dcaa-fc77-41c5-afeb-457e039e9818/prometheus-operator-admission-webhook/0.log"
Jan 20 09:43:44 crc kubenswrapper[4859]: I0120 09:43:44.423113 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-ft4zn_7698897d-ef4b-4fc1-a25a-634a2abae6c7/operator/0.log"
Jan 20 09:43:44 crc kubenswrapper[4859]: I0120 09:43:44.550400 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6bhr5_a4b3a24c-3b9b-4a53-9b19-3ff693862438/perses-operator/0.log"
Jan 20 09:43:44 crc kubenswrapper[4859]: E0120 09:43:44.575286 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:43:45 crc kubenswrapper[4859]: E0120 09:43:45.578535 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:43:51 crc kubenswrapper[4859]: I0120 09:43:51.603044 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:43:51 crc kubenswrapper[4859]: E0120 09:43:51.604228 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:43:57 crc kubenswrapper[4859]: E0120 09:43:57.575277 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.422590 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/util/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: E0120 09:43:58.574496 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.639183 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/pull/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.657961 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/util/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.701285 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/pull/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.819456 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/util/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.916604 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/pull/0.log"
Jan 20 09:43:58 crc kubenswrapper[4859]: I0120 09:43:58.923983 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a6dkhp_da104415-69df-456f-9d81-1e514fc3249f/extract/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.032258 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.148518 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.168913 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.190672 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.383213 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.390639 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/extract/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.411877 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebtnwb_7ce40b55-7671-4f2f-8add-a1b79d87acdb/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.586160 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.732175 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.747433 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.805738 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.907675 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/util/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.912267 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/pull/0.log"
Jan 20 09:43:59 crc kubenswrapper[4859]: I0120 09:43:59.938696 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08k6jjd_e76aa968-4292-49a7-9839-f8b1771798dc/extract/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.043338 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-utilities/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.243017 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-content/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.259488 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-content/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.272327 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-utilities/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.433358 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-utilities/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.434678 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/extract-content/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.660065 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-utilities/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.670147 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4whtw_c29021fc-bea6-40b1-bb49-440f0225014f/registry-server/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.822023 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-content/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.835480 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-utilities/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.843254 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-content/0.log"
Jan 20 09:44:00 crc kubenswrapper[4859]: I0120 09:44:00.985706 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.030437 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.050833 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6k2nj_ed631598-71cb-49af-9e49-e6bc8e4b2208/registry-server/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.166332 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.320377 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.352078 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.376406 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.563471 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.588186 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.609392 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-86ctm_d12f022a-c217-4103-ab89-df75a522d16c/registry-server/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.725475 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.868813 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.874873 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-utilities/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.914051 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-content/0.log"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.921232 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"]
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.922581 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:01 crc kubenswrapper[4859]: I0120 09:44:01.935075 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"]
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.037558 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.037641 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.037700 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vv9r\" (UniqueName: \"kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.078620 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-utilities/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.122180 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-utilities/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.138348 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.138621 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.138737 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vv9r\" (UniqueName: \"kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.138817 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.139115 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.154035 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/registry-server/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.169814 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fq8mj_51c1ea29-76e0-4ee4-ac99-205c1a42832e/extract-content/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.177381 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vv9r\" (UniqueName: \"kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r\") pod \"redhat-operators-g8lsq\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.239075 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8lsq"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.328359 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-utilities/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.416242 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-content/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.446830 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-content/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.572937 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:44:02 crc kubenswrapper[4859]: E0120 09:44:02.573392 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.596930 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-utilities/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.749007 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/registry-server/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.755939 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"]
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.787838 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82v9l_515405e5-4607-4dac-84e9-3ac488a0e03d/extract-content/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.895265 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rqpn6_d30b8424-c2b6-4ae5-9790-74198806c882/marketplace-operator/0.log"
Jan 20 09:44:02 crc kubenswrapper[4859]: I0120 09:44:02.980655 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-utilities/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.142216 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-content/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.185617 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-utilities/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.249577 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-content/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.379718 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-content/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.493487 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-utilities/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.551201 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/extract-utilities/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.560932 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-57hgx_d52b569d-e6cf-4afb-bb63-463e375b4e62/registry-server/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.726912 4859 generic.go:334] "Generic (PLEG): container finished" podID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerID="79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350" exitCode=0
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.726975 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerDied","Data":"79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350"}
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.727042 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerStarted","Data":"1100f0b3ae23a5512052d34b20bfef013b79a5a6cbf50732851f5e524a46f717"}
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.876741 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-utilities/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.889343 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-content/0.log"
Jan 20 09:44:03 crc kubenswrapper[4859]: I0120 09:44:03.943167 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-content/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.032093 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-content/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.041406 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/extract-utilities/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.113111 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-utilities/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.211451 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dfgvl_125927e0-768a-4ab9-b257-cb655ed95a2c/registry-server/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.301710 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-content/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.339001 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-utilities/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.358510 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-content/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.542915 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-utilities/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.546464 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-utilities/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.549326 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/registry-server/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.584485 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k7dlf_5643582d-eb19-4717-8f24-887e783a4533/extract-content/0.log"
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.733601 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerStarted","Data":"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda"}
Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.740012 4859 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-content/0.log" Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.749647 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-content/0.log" Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.753805 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-utilities/0.log" Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.903566 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-content/0.log" Jan 20 09:44:04 crc kubenswrapper[4859]: I0120 09:44:04.930359 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/extract-utilities/0.log" Jan 20 09:44:05 crc kubenswrapper[4859]: I0120 09:44:05.009982 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zvmjj_80af50d4-59da-4499-b0a4-00da43e07f80/registry-server/0.log" Jan 20 09:44:05 crc kubenswrapper[4859]: I0120 09:44:05.742811 4859 generic.go:334] "Generic (PLEG): container finished" podID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerID="a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda" exitCode=0 Jan 20 09:44:05 crc kubenswrapper[4859]: I0120 09:44:05.742912 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerDied","Data":"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda"} Jan 20 09:44:06 crc kubenswrapper[4859]: I0120 09:44:06.751262 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerStarted","Data":"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9"} Jan 20 09:44:10 crc kubenswrapper[4859]: E0120 09:44:10.575563 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:44:11 crc kubenswrapper[4859]: E0120 09:44:11.576248 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.708209 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g8lsq" podStartSLOduration=8.278824188 podStartE2EDuration="10.708189491s" podCreationTimestamp="2026-01-20 09:44:01 +0000 UTC" firstStartedPulling="2026-01-20 09:44:03.728973089 +0000 UTC m=+1518.484989265" lastFinishedPulling="2026-01-20 09:44:06.158338382 +0000 UTC m=+1520.914354568" observedRunningTime="2026-01-20 09:44:06.78954353 +0000 UTC m=+1521.545559766" watchObservedRunningTime="2026-01-20 09:44:11.708189491 +0000 UTC m=+1526.464205667" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.714144 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"] Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.715922 4859 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.732628 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"] Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.800872 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.800968 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.801015 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4msm\" (UniqueName: \"kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.902451 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.902531 4859 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b4msm\" (UniqueName: \"kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.902670 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.903057 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.903186 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:11 crc kubenswrapper[4859]: I0120 09:44:11.936001 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4msm\" (UniqueName: \"kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm\") pod \"certified-operators-4v5z8\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") " pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.035027 4859 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.243045 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.243362 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.298476 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.484996 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"] Jan 20 09:44:12 crc kubenswrapper[4859]: W0120 09:44:12.489511 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7a5d507_5be3_480e_b4c8_437796fd0b86.slice/crio-edb0e514c9428de25b0ad36d74b680d06e33be975ca785b4a2bd3466137e3f96 WatchSource:0}: Error finding container edb0e514c9428de25b0ad36d74b680d06e33be975ca785b4a2bd3466137e3f96: Status 404 returned error can't find the container with id edb0e514c9428de25b0ad36d74b680d06e33be975ca785b4a2bd3466137e3f96 Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.786951 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerStarted","Data":"edb0e514c9428de25b0ad36d74b680d06e33be975ca785b4a2bd3466137e3f96"} Jan 20 09:44:12 crc kubenswrapper[4859]: I0120 09:44:12.845428 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:13 crc kubenswrapper[4859]: I0120 09:44:13.574535 4859 scope.go:117] "RemoveContainer" 
containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:44:13 crc kubenswrapper[4859]: E0120 09:44:13.575123 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:44:13 crc kubenswrapper[4859]: I0120 09:44:13.797581 4859 generic.go:334] "Generic (PLEG): container finished" podID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerID="5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd" exitCode=0 Jan 20 09:44:13 crc kubenswrapper[4859]: I0120 09:44:13.797688 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerDied","Data":"5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd"} Jan 20 09:44:14 crc kubenswrapper[4859]: I0120 09:44:14.675882 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"] Jan 20 09:44:14 crc kubenswrapper[4859]: I0120 09:44:14.803228 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8lsq" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="registry-server" containerID="cri-o://a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9" gracePeriod=2 Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.712716 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.809991 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerStarted","Data":"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"} Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.811879 4859 generic.go:334] "Generic (PLEG): container finished" podID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerID="a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9" exitCode=0 Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.811920 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerDied","Data":"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9"} Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.811949 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8lsq" event={"ID":"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc","Type":"ContainerDied","Data":"1100f0b3ae23a5512052d34b20bfef013b79a5a6cbf50732851f5e524a46f717"} Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.811967 4859 scope.go:117] "RemoveContainer" containerID="a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.811966 4859 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g8lsq" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.830408 4859 scope.go:117] "RemoveContainer" containerID="a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.848158 4859 scope.go:117] "RemoveContainer" containerID="79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.853492 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vv9r\" (UniqueName: \"kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r\") pod \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.853679 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content\") pod \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.855897 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities\") pod \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\" (UID: \"de2cfdb8-1c2e-44e9-9b29-d452b5555fcc\") " Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.857236 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities" (OuterVolumeSpecName: "utilities") pod "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" (UID: "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.858463 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r" (OuterVolumeSpecName: "kube-api-access-5vv9r") pod "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" (UID: "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc"). InnerVolumeSpecName "kube-api-access-5vv9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.877163 4859 scope.go:117] "RemoveContainer" containerID="a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9" Jan 20 09:44:15 crc kubenswrapper[4859]: E0120 09:44:15.877481 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9\": container with ID starting with a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9 not found: ID does not exist" containerID="a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.877507 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9"} err="failed to get container status \"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9\": rpc error: code = NotFound desc = could not find container \"a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9\": container with ID starting with a45175901fd71cc1200123fa29ddbb2e7155fa2cd4c8813ee32980bda9aa57e9 not found: ID does not exist" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.877536 4859 scope.go:117] "RemoveContainer" containerID="a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda" Jan 20 09:44:15 crc kubenswrapper[4859]: E0120 09:44:15.877742 
4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda\": container with ID starting with a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda not found: ID does not exist" containerID="a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.877768 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda"} err="failed to get container status \"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda\": rpc error: code = NotFound desc = could not find container \"a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda\": container with ID starting with a452747c7f159bfd2f4a661b0f9a68c0c69d47505ec3d8978429589553f7adda not found: ID does not exist" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.877783 4859 scope.go:117] "RemoveContainer" containerID="79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350" Jan 20 09:44:15 crc kubenswrapper[4859]: E0120 09:44:15.878114 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350\": container with ID starting with 79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350 not found: ID does not exist" containerID="79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.878166 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350"} err="failed to get container status \"79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350\": rpc error: code = 
NotFound desc = could not find container \"79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350\": container with ID starting with 79f9ff632cedc1676152882ca28e2e994a933dccc67024d4e5f5450e453c2350 not found: ID does not exist" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.957755 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vv9r\" (UniqueName: \"kubernetes.io/projected/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-kube-api-access-5vv9r\") on node \"crc\" DevicePath \"\"" Jan 20 09:44:15 crc kubenswrapper[4859]: I0120 09:44:15.957826 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:44:16 crc kubenswrapper[4859]: I0120 09:44:16.824741 4859 generic.go:334] "Generic (PLEG): container finished" podID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerID="765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474" exitCode=0 Jan 20 09:44:16 crc kubenswrapper[4859]: I0120 09:44:16.824815 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerDied","Data":"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"} Jan 20 09:44:16 crc kubenswrapper[4859]: I0120 09:44:16.843935 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" (UID: "de2cfdb8-1c2e-44e9-9b29-d452b5555fcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 09:44:16 crc kubenswrapper[4859]: I0120 09:44:16.870937 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:44:17 crc kubenswrapper[4859]: I0120 09:44:17.042905 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"] Jan 20 09:44:17 crc kubenswrapper[4859]: I0120 09:44:17.051694 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8lsq"] Jan 20 09:44:17 crc kubenswrapper[4859]: I0120 09:44:17.585322 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" path="/var/lib/kubelet/pods/de2cfdb8-1c2e-44e9-9b29-d452b5555fcc/volumes" Jan 20 09:44:17 crc kubenswrapper[4859]: I0120 09:44:17.995135 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d8b975ddc-dwpx8_9cbde904-658d-4fc7-9705-692dc47c50c4/prometheus-operator-admission-webhook/0.log" Jan 20 09:44:18 crc kubenswrapper[4859]: I0120 09:44:18.006172 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-22h5d_5fd94288-36b5-45a6-8b54-f91cf71c4db8/prometheus-operator/0.log" Jan 20 09:44:18 crc kubenswrapper[4859]: I0120 09:44:18.014383 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d8b975ddc-wt7gf_8b98dcaa-fc77-41c5-afeb-457e039e9818/prometheus-operator-admission-webhook/0.log" Jan 20 09:44:18 crc kubenswrapper[4859]: I0120 09:44:18.137956 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-ft4zn_7698897d-ef4b-4fc1-a25a-634a2abae6c7/operator/0.log" Jan 20 09:44:18 
crc kubenswrapper[4859]: I0120 09:44:18.163954 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6bhr5_a4b3a24c-3b9b-4a53-9b19-3ff693862438/perses-operator/0.log" Jan 20 09:44:18 crc kubenswrapper[4859]: I0120 09:44:18.860448 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerStarted","Data":"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"} Jan 20 09:44:18 crc kubenswrapper[4859]: I0120 09:44:18.879750 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4v5z8" podStartSLOduration=3.889040042 podStartE2EDuration="7.879730469s" podCreationTimestamp="2026-01-20 09:44:11 +0000 UTC" firstStartedPulling="2026-01-20 09:44:13.800271524 +0000 UTC m=+1528.556287700" lastFinishedPulling="2026-01-20 09:44:17.790961951 +0000 UTC m=+1532.546978127" observedRunningTime="2026-01-20 09:44:18.878456358 +0000 UTC m=+1533.634472524" watchObservedRunningTime="2026-01-20 09:44:18.879730469 +0000 UTC m=+1533.635746645" Jan 20 09:44:22 crc kubenswrapper[4859]: I0120 09:44:22.035706 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:22 crc kubenswrapper[4859]: I0120 09:44:22.036216 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:22 crc kubenswrapper[4859]: I0120 09:44:22.111044 4859 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:22 crc kubenswrapper[4859]: I0120 09:44:22.951458 4859 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4v5z8" Jan 20 09:44:23 crc kubenswrapper[4859]: I0120 
09:44:23.279559 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"]
Jan 20 09:44:23 crc kubenswrapper[4859]: E0120 09:44:23.576575 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:44:24 crc kubenswrapper[4859]: E0120 09:44:24.574735 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:44:24 crc kubenswrapper[4859]: I0120 09:44:24.899815 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4v5z8" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="registry-server" containerID="cri-o://eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c" gracePeriod=2
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.581228 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:44:25 crc kubenswrapper[4859]: E0120 09:44:25.581735 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.901620 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4v5z8"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.917280 4859 generic.go:334] "Generic (PLEG): container finished" podID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerID="eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c" exitCode=0
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.917330 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerDied","Data":"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"}
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.917359 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4v5z8" event={"ID":"d7a5d507-5be3-480e-b4c8-437796fd0b86","Type":"ContainerDied","Data":"edb0e514c9428de25b0ad36d74b680d06e33be975ca785b4a2bd3466137e3f96"}
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.917380 4859 scope.go:117] "RemoveContainer" containerID="eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.917520 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4v5z8"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.944648 4859 scope.go:117] "RemoveContainer" containerID="765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.962042 4859 scope.go:117] "RemoveContainer" containerID="5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.991577 4859 scope.go:117] "RemoveContainer" containerID="eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992036 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities\") pod \"d7a5d507-5be3-480e-b4c8-437796fd0b86\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") "
Jan 20 09:44:25 crc kubenswrapper[4859]: E0120 09:44:25.992076 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c\": container with ID starting with eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c not found: ID does not exist" containerID="eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992099 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content\") pod \"d7a5d507-5be3-480e-b4c8-437796fd0b86\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") "
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992120 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c"} err="failed to get container status \"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c\": rpc error: code = NotFound desc = could not find container \"eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c\": container with ID starting with eb46488bcfc71c679b58876adbc2a4aa7c5c1c388d2ffb723c5176079f74553c not found: ID does not exist"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992145 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4msm\" (UniqueName: \"kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm\") pod \"d7a5d507-5be3-480e-b4c8-437796fd0b86\" (UID: \"d7a5d507-5be3-480e-b4c8-437796fd0b86\") "
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992150 4859 scope.go:117] "RemoveContainer" containerID="765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"
Jan 20 09:44:25 crc kubenswrapper[4859]: E0120 09:44:25.992695 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474\": container with ID starting with 765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474 not found: ID does not exist" containerID="765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992727 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474"} err="failed to get container status \"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474\": rpc error: code = NotFound desc = could not find container \"765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474\": container with ID starting with 765c50961ca366d5916de018bd43a7186be44e1cb6672a94c97df00d94740474 not found: ID does not exist"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992745 4859 scope.go:117] "RemoveContainer" containerID="5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.992968 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities" (OuterVolumeSpecName: "utilities") pod "d7a5d507-5be3-480e-b4c8-437796fd0b86" (UID: "d7a5d507-5be3-480e-b4c8-437796fd0b86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:44:25 crc kubenswrapper[4859]: E0120 09:44:25.993182 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd\": container with ID starting with 5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd not found: ID does not exist" containerID="5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd"
Jan 20 09:44:25 crc kubenswrapper[4859]: I0120 09:44:25.993249 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd"} err="failed to get container status \"5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd\": rpc error: code = NotFound desc = could not find container \"5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd\": container with ID starting with 5ee9fd9942d449b787e37632e272029a0d7a164df0d30b9c3fa4d9e6a46428cd not found: ID does not exist"
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.000751 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm" (OuterVolumeSpecName: "kube-api-access-b4msm") pod "d7a5d507-5be3-480e-b4c8-437796fd0b86" (UID: "d7a5d507-5be3-480e-b4c8-437796fd0b86"). InnerVolumeSpecName "kube-api-access-b4msm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.094079 4859 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.094118 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4msm\" (UniqueName: \"kubernetes.io/projected/d7a5d507-5be3-480e-b4c8-437796fd0b86-kube-api-access-b4msm\") on node \"crc\" DevicePath \"\""
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.103460 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7a5d507-5be3-480e-b4c8-437796fd0b86" (UID: "d7a5d507-5be3-480e-b4c8-437796fd0b86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.196327 4859 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a5d507-5be3-480e-b4c8-437796fd0b86-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.261599 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"]
Jan 20 09:44:26 crc kubenswrapper[4859]: I0120 09:44:26.266934 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4v5z8"]
Jan 20 09:44:27 crc kubenswrapper[4859]: I0120 09:44:27.592564 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" path="/var/lib/kubelet/pods/d7a5d507-5be3-480e-b4c8-437796fd0b86/volumes"
Jan 20 09:44:37 crc kubenswrapper[4859]: I0120 09:44:37.574346 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:44:37 crc kubenswrapper[4859]: E0120 09:44:37.575268 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:44:38 crc kubenswrapper[4859]: E0120 09:44:38.576320 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:44:38 crc kubenswrapper[4859]: E0120 09:44:38.577058 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:44:49 crc kubenswrapper[4859]: I0120 09:44:49.573855 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:44:49 crc kubenswrapper[4859]: E0120 09:44:49.574763 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:44:51 crc kubenswrapper[4859]: E0120 09:44:51.579826 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:44:53 crc kubenswrapper[4859]: E0120 09:44:53.575329 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.146966 4859 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"]
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148171 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148194 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148230 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="extract-content"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148243 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="extract-content"
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148276 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="extract-utilities"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148301 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="extract-utilities"
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148320 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="extract-utilities"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148333 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="extract-utilities"
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148351 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="extract-content"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148363 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="extract-content"
Jan 20 09:45:00 crc kubenswrapper[4859]: E0120 09:45:00.148385 4859 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148397 4859 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148606 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a5d507-5be3-480e-b4c8-437796fd0b86" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.148641 4859 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2cfdb8-1c2e-44e9-9b29-d452b5555fcc" containerName="registry-server"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.149386 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.153839 4859 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.164155 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"]
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.169013 4859 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.217299 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzjv\" (UniqueName: \"kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.217363 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.217399 4859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.318965 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.319194 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzjv\" (UniqueName: \"kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.319284 4859 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.320985 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.328550 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.348639 4859 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzjv\" (UniqueName: \"kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv\") pod \"collect-profiles-29481705-knzjq\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.477548 4859 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:00 crc kubenswrapper[4859]: I0120 09:45:00.931391 4859 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"]
Jan 20 09:45:00 crc kubenswrapper[4859]: W0120 09:45:00.933684 4859 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc35739e_c549_4d51_94ac_da65bead61fc.slice/crio-1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7 WatchSource:0}: Error finding container 1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7: Status 404 returned error can't find the container with id 1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7
Jan 20 09:45:01 crc kubenswrapper[4859]: I0120 09:45:01.216044 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq" event={"ID":"bc35739e-c549-4d51-94ac-da65bead61fc","Type":"ContainerStarted","Data":"dc08a351f45f6b77599a55546f44b8e482fe7628c6f1060b4aeceff1b08fb5d9"}
Jan 20 09:45:01 crc kubenswrapper[4859]: I0120 09:45:01.216105 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq" event={"ID":"bc35739e-c549-4d51-94ac-da65bead61fc","Type":"ContainerStarted","Data":"1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7"}
Jan 20 09:45:01 crc kubenswrapper[4859]: I0120 09:45:01.239370 4859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq" podStartSLOduration=1.239346682 podStartE2EDuration="1.239346682s" podCreationTimestamp="2026-01-20 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:45:01.237912778 +0000 UTC m=+1575.993928984" watchObservedRunningTime="2026-01-20 09:45:01.239346682 +0000 UTC m=+1575.995362868"
Jan 20 09:45:01 crc kubenswrapper[4859]: I0120 09:45:01.578364 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:45:01 crc kubenswrapper[4859]: E0120 09:45:01.581814 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:45:02 crc kubenswrapper[4859]: I0120 09:45:02.224648 4859 generic.go:334] "Generic (PLEG): container finished" podID="bc35739e-c549-4d51-94ac-da65bead61fc" containerID="dc08a351f45f6b77599a55546f44b8e482fe7628c6f1060b4aeceff1b08fb5d9" exitCode=0
Jan 20 09:45:02 crc kubenswrapper[4859]: I0120 09:45:02.224726 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq" event={"ID":"bc35739e-c549-4d51-94ac-da65bead61fc","Type":"ContainerDied","Data":"dc08a351f45f6b77599a55546f44b8e482fe7628c6f1060b4aeceff1b08fb5d9"}
Jan 20 09:45:03 crc kubenswrapper[4859]: E0120 09:45:03.575035 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a"
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.637114 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.774307 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzjv\" (UniqueName: \"kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv\") pod \"bc35739e-c549-4d51-94ac-da65bead61fc\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") "
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.774559 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume\") pod \"bc35739e-c549-4d51-94ac-da65bead61fc\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") "
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.774626 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume\") pod \"bc35739e-c549-4d51-94ac-da65bead61fc\" (UID: \"bc35739e-c549-4d51-94ac-da65bead61fc\") "
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.775318 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc35739e-c549-4d51-94ac-da65bead61fc" (UID: "bc35739e-c549-4d51-94ac-da65bead61fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.779874 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc35739e-c549-4d51-94ac-da65bead61fc" (UID: "bc35739e-c549-4d51-94ac-da65bead61fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.780007 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv" (OuterVolumeSpecName: "kube-api-access-zvzjv") pod "bc35739e-c549-4d51-94ac-da65bead61fc" (UID: "bc35739e-c549-4d51-94ac-da65bead61fc"). InnerVolumeSpecName "kube-api-access-zvzjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.876265 4859 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc35739e-c549-4d51-94ac-da65bead61fc-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.876322 4859 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc35739e-c549-4d51-94ac-da65bead61fc-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:45:03 crc kubenswrapper[4859]: I0120 09:45:03.876343 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzjv\" (UniqueName: \"kubernetes.io/projected/bc35739e-c549-4d51-94ac-da65bead61fc-kube-api-access-zvzjv\") on node \"crc\" DevicePath \"\""
Jan 20 09:45:04 crc kubenswrapper[4859]: I0120 09:45:04.260345 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq" event={"ID":"bc35739e-c549-4d51-94ac-da65bead61fc","Type":"ContainerDied","Data":"1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7"}
Jan 20 09:45:04 crc kubenswrapper[4859]: I0120 09:45:04.260414 4859 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1b4267a5c882b914e1554cff64c1d1c4026be3d1b23a37bd6f377421b2adc7"
Jan 20 09:45:04 crc kubenswrapper[4859]: I0120 09:45:04.260926 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481705-knzjq"
Jan 20 09:45:05 crc kubenswrapper[4859]: E0120 09:45:05.578115 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39"
Jan 20 09:45:06 crc kubenswrapper[4859]: I0120 09:45:06.275763 4859 generic.go:334] "Generic (PLEG): container finished" podID="c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" containerID="d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e" exitCode=0
Jan 20 09:45:06 crc kubenswrapper[4859]: I0120 09:45:06.275844 4859 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" event={"ID":"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96","Type":"ContainerDied","Data":"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"}
Jan 20 09:45:06 crc kubenswrapper[4859]: I0120 09:45:06.277005 4859 scope.go:117] "RemoveContainer" containerID="d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"
Jan 20 09:45:06 crc kubenswrapper[4859]: I0120 09:45:06.922401 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjhxj_must-gather-9nq8b_c4b6d50e-d475-4dd8-a01e-b59f66c2cc96/gather/0.log"
Jan 20 09:45:12 crc kubenswrapper[4859]: I0120 09:45:12.574410 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3"
Jan 20 09:45:12 crc kubenswrapper[4859]: E0120 09:45:12.576427 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483"
Jan 20 09:45:13 crc kubenswrapper[4859]: I0120 09:45:13.798352 4859 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jjhxj/must-gather-9nq8b"]
Jan 20 09:45:13 crc kubenswrapper[4859]: I0120 09:45:13.798924 4859 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jjhxj/must-gather-9nq8b" podUID="c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" containerName="copy" containerID="cri-o://78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e" gracePeriod=2
Jan 20 09:45:13 crc kubenswrapper[4859]: I0120 09:45:13.806050 4859 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jjhxj/must-gather-9nq8b"]
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.236377 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjhxj_must-gather-9nq8b_c4b6d50e-d475-4dd8-a01e-b59f66c2cc96/copy/0.log"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.241270 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jjhxj/must-gather-9nq8b"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.327959 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmfg5\" (UniqueName: \"kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5\") pod \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") "
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.328054 4859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output\") pod \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\" (UID: \"c4b6d50e-d475-4dd8-a01e-b59f66c2cc96\") "
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.336326 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5" (OuterVolumeSpecName: "kube-api-access-kmfg5") pod "c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" (UID: "c4b6d50e-d475-4dd8-a01e-b59f66c2cc96"). InnerVolumeSpecName "kube-api-access-kmfg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.343459 4859 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjhxj_must-gather-9nq8b_c4b6d50e-d475-4dd8-a01e-b59f66c2cc96/copy/0.log"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.343734 4859 generic.go:334] "Generic (PLEG): container finished" podID="c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" containerID="78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e" exitCode=143
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.343779 4859 scope.go:117] "RemoveContainer" containerID="78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.343814 4859 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jjhxj/must-gather-9nq8b"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.358460 4859 scope.go:117] "RemoveContainer" containerID="d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.379408 4859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" (UID: "c4b6d50e-d475-4dd8-a01e-b59f66c2cc96"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.386585 4859 scope.go:117] "RemoveContainer" containerID="78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e"
Jan 20 09:45:14 crc kubenswrapper[4859]: E0120 09:45:14.387068 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e\": container with ID starting with 78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e not found: ID does not exist" containerID="78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.387107 4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e"} err="failed to get container status \"78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e\": rpc error: code = NotFound desc = could not find container \"78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e\": container with ID starting with 78a9e4fb092e47820e57fd618c86fd1c7f0e2edc4763bb9531ad6aed819cef8e not found: ID does not exist"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.387132 4859 scope.go:117] "RemoveContainer" containerID="d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"
Jan 20 09:45:14 crc kubenswrapper[4859]: E0120 09:45:14.387466 4859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e\": container with ID starting with d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e not found: ID does not exist" containerID="d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"
Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.387498
4859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e"} err="failed to get container status \"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e\": rpc error: code = NotFound desc = could not find container \"d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e\": container with ID starting with d0c870e778b6fe98e09ed11890b7c1a5c8ae547f49971487d6481160a125bb1e not found: ID does not exist" Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.429428 4859 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 20 09:45:14 crc kubenswrapper[4859]: I0120 09:45:14.429469 4859 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmfg5\" (UniqueName: \"kubernetes.io/projected/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96-kube-api-access-kmfg5\") on node \"crc\" DevicePath \"\"" Jan 20 09:45:15 crc kubenswrapper[4859]: E0120 09:45:15.581403 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:45:15 crc kubenswrapper[4859]: I0120 09:45:15.581832 4859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b6d50e-d475-4dd8-a01e-b59f66c2cc96" path="/var/lib/kubelet/pods/c4b6d50e-d475-4dd8-a01e-b59f66c2cc96/volumes" Jan 20 09:45:19 crc kubenswrapper[4859]: E0120 09:45:19.576238 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling 
image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:45:24 crc kubenswrapper[4859]: I0120 09:45:24.573977 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:45:24 crc kubenswrapper[4859]: E0120 09:45:24.574953 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:45:29 crc kubenswrapper[4859]: E0120 09:45:29.576636 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:45:31 crc kubenswrapper[4859]: E0120 09:45:31.576113 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:45:39 crc kubenswrapper[4859]: I0120 09:45:39.574453 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:45:39 crc kubenswrapper[4859]: 
E0120 09:45:39.575316 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:45:40 crc kubenswrapper[4859]: E0120 09:45:40.577122 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:45:43 crc kubenswrapper[4859]: E0120 09:45:43.576406 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:45:53 crc kubenswrapper[4859]: E0120 09:45:53.582009 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:45:54 crc kubenswrapper[4859]: I0120 09:45:54.573623 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:45:54 crc kubenswrapper[4859]: 
E0120 09:45:54.574052 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:45:57 crc kubenswrapper[4859]: E0120 09:45:57.579446 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:46:04 crc kubenswrapper[4859]: E0120 09:46:04.577845 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:46:08 crc kubenswrapper[4859]: I0120 09:46:08.573452 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:46:08 crc kubenswrapper[4859]: E0120 09:46:08.574519 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" 
podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:46:09 crc kubenswrapper[4859]: E0120 09:46:09.575971 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:46:17 crc kubenswrapper[4859]: E0120 09:46:17.577003 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:46:22 crc kubenswrapper[4859]: I0120 09:46:22.574004 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:46:22 crc kubenswrapper[4859]: E0120 09:46:22.574689 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:46:23 crc kubenswrapper[4859]: E0120 09:46:23.575099 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:46:28 crc kubenswrapper[4859]: E0120 09:46:28.577515 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:46:35 crc kubenswrapper[4859]: I0120 09:46:35.580194 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:46:35 crc kubenswrapper[4859]: E0120 09:46:35.581408 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:46:35 crc kubenswrapper[4859]: E0120 09:46:35.582826 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:46:39 crc kubenswrapper[4859]: E0120 09:46:39.577176 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" 
podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:46:46 crc kubenswrapper[4859]: E0120 09:46:46.576125 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:46:47 crc kubenswrapper[4859]: I0120 09:46:47.574020 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:46:47 crc kubenswrapper[4859]: E0120 09:46:47.574286 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:46:53 crc kubenswrapper[4859]: E0120 09:46:53.576750 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:47:00 crc kubenswrapper[4859]: I0120 09:47:00.574179 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:47:00 crc kubenswrapper[4859]: E0120 09:47:00.575165 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:47:00 crc kubenswrapper[4859]: E0120 09:47:00.576620 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:47:06 crc kubenswrapper[4859]: E0120 09:47:06.576965 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:47:12 crc kubenswrapper[4859]: I0120 09:47:12.574380 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:47:12 crc kubenswrapper[4859]: E0120 09:47:12.575195 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:47:13 crc kubenswrapper[4859]: I0120 09:47:13.581087 4859 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 
20 09:47:13 crc kubenswrapper[4859]: E0120 09:47:13.640163 4859 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 09:47:13 crc kubenswrapper[4859]: E0120 09:47:13.640354 4859 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwpzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-pr2xt_service-telemetry(7ab9b124-5d3b-4d56-b1c8-ab68152a2e39): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Jan 20 09:47:13 crc kubenswrapper[4859]: E0120 09:47:13.641571 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:47:17 crc kubenswrapper[4859]: E0120 09:47:17.576358 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:47:25 crc kubenswrapper[4859]: E0120 09:47:25.582912 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:47:26 crc kubenswrapper[4859]: I0120 09:47:26.574687 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:47:26 crc kubenswrapper[4859]: E0120 09:47:26.575409 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:47:30 crc kubenswrapper[4859]: E0120 09:47:30.594584 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" 
podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:47:37 crc kubenswrapper[4859]: E0120 09:47:37.576123 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:47:39 crc kubenswrapper[4859]: I0120 09:47:39.574182 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:47:39 crc kubenswrapper[4859]: E0120 09:47:39.575041 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:47:43 crc kubenswrapper[4859]: E0120 09:47:43.576026 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:47:50 crc kubenswrapper[4859]: E0120 09:47:50.577771 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" 
podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:47:53 crc kubenswrapper[4859]: I0120 09:47:53.574184 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:47:53 crc kubenswrapper[4859]: E0120 09:47:53.574680 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" Jan 20 09:47:54 crc kubenswrapper[4859]: E0120 09:47:54.574707 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-fzgkn" podUID="778d674c-d99e-4c83-9781-9a772e7a7c2a" Jan 20 09:48:01 crc kubenswrapper[4859]: E0120 09:48:01.578327 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-pr2xt" podUID="7ab9b124-5d3b-4d56-b1c8-ab68152a2e39" Jan 20 09:48:06 crc kubenswrapper[4859]: I0120 09:48:06.574190 4859 scope.go:117] "RemoveContainer" containerID="8ad3eb2fdf3390b2ede582e298bf038cf1a07e63c15d9e3b26c1ecba27f67af3" Jan 20 09:48:06 crc kubenswrapper[4859]: E0120 09:48:06.575223 4859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-knvgk_openshift-machine-config-operator(dab032ef-85ae-456c-b5ea-750bc1c32483)\"" pod="openshift-machine-config-operator/machine-config-daemon-knvgk" podUID="dab032ef-85ae-456c-b5ea-750bc1c32483" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515133647541024456 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015133647542017374 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015133643663016517 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015133643663015467 5ustar corecore